
How AGI will DESTROY the ELITES

Transcript

0:00
So, something that's been on my mind lately is: what is the future role of elites? I asked this on the internet, and the overwhelming majority of people said, "Well, I don't understand what the point of elites is today." So before we can say what the point of elites is in the future, we have to establish what function they serve today, and the number one thing is management of complexity. The best definition of "elite" that I have come across in my research (this is salient to post-labor economics) is a minority class, or a minority group of people, with outsized agenda-setting power. Now, that is kind of generic and abstract. When we say outsized agenda-setting power, we mean a group of people with political pull, intellectual pull, or financial pull: some ability to change what society does. So a politician is a form of elite. A billionaire is a form of elite. Even people who command a lot of cognitive attention are a form of elite: Mr. Beast is an attention elite, and other big influencers like Andrew Tate are a form of elite as well.

1:19
So you ask, what is it that they did? Strategic competence is one of the primary things in this day and age. Over history there have been different kinds of elites. Way back in the day you had the warrior elite: someone who could provide martial force, or violence, for the protection of a particular group of people. That's the feudal lords and, even earlier, the war chiefs. Another form of elite, back in Paleolithic times, was the people who were more in tune with reality: they understood the weather, they understood what plants needed and what animals needed, and those sorts of things.

2:05
Moving forward to today, one of the primary things we have is intelligence arbitrage. Intelligence arbitrage is basically "I'm smarter than the average bear, therefore people listen to me and I can make stuff happen." Elon Musk is a prime example: he is smarter than average, and he's also very ruthless, same as every other tech billionaire. Because of that compression of intelligence and compression of strategy, they are able to solve problems for society. When I say solve problems for society: the primary problem Amazon solved is getting stuff to you faster. That is how Jeff Bezos was able to parlay his intelligence into rent-seeking behavior, and the rent he's seeking is, "Well, I can get stuff to you faster and cheaper than anyone else, therefore I'm going to make mad bank on that." Elon Musk figured out, "Hey, I can get stuff to space cheaper, so I'm going to make mad bank on that," and so on.

3:02
The idea, however, is that in a post-AGI world, high-level strategy and logistics are commodities. Superintelligence is cheap, abundant, and on demand. That means the exact same thing that made Jeff Bezos, Mark Zuckerberg, and Elon Musk wealthy is no longer going to be a differentiating factor. We are approaching an inversion point, as the entire internet loses its mind over things like openclaw and that article someone wrote about big things happening. What most people don't realize is that if they became wealthy by parlaying their intelligence, compressing strategy, or applying competence (if you had to boil it down to one word: competence), the market value of competence is going to drop off a cliff, because every single person is soon going to have their own agentic chief of staff that is smarter than every billionaire, every PhD researcher, and every Nobel Prize winner combined.

4:04
So then the only thing that matters is accountability and liability, at least in terms of what you're actually offering the economy. Basically, you become a moral crumple zone: someone people can point to and say, "Hey, you did something bad, we don't like you." There is another dimension to this, which I'll only cover briefly in this video, that I call the vision, values, and reputation dimension. Anyway, moving on: breaking the iron law of oligarchy. Robert Michels's iron law states that organization requires delegation, which leads inevitably to oligarchy. Infinite agentic bandwidth breaks this. We move from representative democracy (low bandwidth) to direct hyper-negotiation (infinite bandwidth).

4:47
Right now, the way the entire world works is that there's a whole bunch of users or citizens or voters or customers, and they all send information up to representatives. Whenever you're unhappy with OpenAI or Twitter or Tesla or whoever, they're one company, and they have product managers and project managers trying to aggregate all of those preferences to make a better product or make better decisions. Same thing for representatives in Congress, same thing for senators in the Senate. This is the competence arbitrage of aggregating the needs and preferences of many, many people (sometimes hundreds, sometimes thousands, sometimes millions) and turning them into one concrete block of output, whether that's a law, a product, or a decision.

5:44
However, what we are moving towards is inspired by what Moltbook represents. Moltbook is of course a very early example, but if you have dozens or hundreds or thousands of hyper-intelligent agents advocating on your behalf, debating everything you care about and everything you need and want with every single other agent in the world, then bandwidth is no longer a bottleneck. Not only is it massively parallel, each one of those agents is smarter than you. Agent-to-agent direct micro-negotiation then becomes the default way of making decisions and getting feedback. When that happens, the value of today's elites, the aggregators of competence (whatever competence looks like: raw intelligence, strategic competence, strategic incompetence in the case of government, or technical competence) goes away. So then how do you make decisions? How does the marketplace work? How does democracy work? The entire arbitrage that elites offer goes away.

6:58
So we go from managers to visionaries, and this is where my framework of vision, values, and reputation comes in, although it didn't make it into the slide deck. Basically, the new elite is whoever says, "Hey, let's go to Mars": the preference coalition builders. When the "how" (execution) becomes costless or otherwise equal (and by equal I mean everyone has the same ability to execute), value shifts entirely to the "what," which is the vision, and the "why," which is the values. Power moves from managers, who are arbitrage optimizers, to proposers, the people who are the most inspiring. These are the preference coalition builders, those who can get 51% of the population to say, "Let's go to Mars."

7:44
This is one of the primary things that Elon Musk either explicitly or intuitively understood, which is why he bought Twitter. This is the attention elite. We're moving from a more technical elite to a vibes-based elitism: the charismatic elite, influence without structural control. If you can marshal enough people to believe in your vision, then you can make things happen, particularly in the post-AGI or post-ASI future. When intelligence isn't a bottleneck and human labor isn't a bottleneck, the question becomes: what is the preference of humanity? That becomes the most scarce resource. Where does humanity want its attention to go? Of course, when you have 8 billion individual agents, all with different desires, and you can arbitrage that across many different projects (because we have a whole planet to work on), you end up with many visionaries saying, "Hey, I'm over here doing post-labor economics, someone else is over there doing cancer research, and we're going to have a hyperabundance of cognition, so I'm going to point mine in that direction."

8:59
And to be completely transparent, this is something I stumbled into as well, because my entire mandate comes from you: from my Patreon supporters, my Substack subscribers, my Twitter subscribers. All of you say, "Dave, I want to empower you because you're solving problems that I care about." So I am a very early version of this. It goes beyond the attention economy; this is the preference economy of humanity. This is the vision economy.

9:28
Next is the persistence of gravity: why flat doesn't work. You might think, well, why don't we just do a flat hierarchy? The problem (and I did some research into this) is that every time people experiment with flat hierarchies, they end up with unspoken hierarchies, which are often even worse. So flat doesn't work, even without structural elites. In this case, a structural elite is one whose position is explicit: you are a billionaire, you are a senator, you are a governor, whatever it happens to be. If you refuse to make the hierarchy legible, you still end up with hierarchies, just illegible ones, which is arguably worse. Network hubs emerge due to preferential attachment in scale-free networks: new nodes connect to existing hubs, creating super nodes that process the majority of traffic, as the sketch below illustrates.
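
To make the preferential-attachment claim concrete, here is a minimal simulation sketch in Python. This is my own illustration, not something from the video; the function name and parameters are arbitrary. Each new node attaches to an existing node with probability proportional to that node's current degree, and a small set of early, well-connected nodes ends up holding a disproportionate share of all connections.

```python
import random
from collections import Counter

def grow_network(n_nodes: int, seed: int = 0) -> Counter:
    """Grow a network one node at a time with degree-proportional attachment."""
    random.seed(seed)
    degrees = Counter({0: 1, 1: 1})   # start with two connected nodes
    endpoints = [0, 1]                # each node appears once per link it has
    for new_node in range(2, n_nodes):
        target = random.choice(endpoints)   # uniform over endpoints = degree-proportional
        degrees[new_node] += 1
        degrees[target] += 1
        endpoints.extend([new_node, target])
    return degrees

if __name__ == "__main__":
    degrees = grow_network(10_000)
    total_links = sum(degrees.values())
    hub_share = sum(d for _, d in degrees.most_common(100)) / total_links
    print(f"Top 1% of nodes hold {hub_share:.0%} of all connections")
```

Running it shows the hub effect directly: equal rules at the moment of attachment still produce a handful of super nodes.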

10:16
This is why something like liquid democracy just doesn't work in simulations and experiments. Liquid democracy, if you're not familiar with it, is basically direct democracy, except that because there are too many things to vote on, you delegate the parts you don't want to handle yourself. Let's use an example: you say, "I'm going to delegate all my votes on AI policy to Dave." Great. That means I'm voting on your behalf, and I could be voting for a quarter of a million people, which is very outsized influence. So I become an influence elite. Now, you might say, "Well, I'll take that vote back and give it to someone else." But then there are going to be parts of AI that I don't fully understand, and I'm going to delegate my takes on AI safety and business policy and other things to other people. And guess what? You end up with trophic layers.

11:10
If you remember from high school biology, ecology 101, you have trophic layers: the autotrophs at the bottom, which are plants, then the herbivores, then the predators, and then the apex predators. Whenever we run experiments and say, "We're going to create a completely flat hierarchy and try to scale it," you still end up with apex predators: people who aggregate more preference than anyone else. The delegation sketch below shows the same concentration dynamic.
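
Here is a toy delegation sketch, again my own illustration rather than anything from the video, assuming a simple popularity-weighted delegation rule: most voters delegate, being trusted makes you more visible, delegations resolve transitively, and the final voting weight piles up on a handful of super voters.

```python
import random
from collections import Counter

def run_liquid_democracy(n_voters: int = 2_000, p_delegate: float = 0.8,
                         seed: int = 0) -> Counter:
    """Resolve transitive delegations and return the final vote weight per voter."""
    random.seed(seed)
    popularity = [1] * n_voters          # proxy for visibility / follower count
    delegate_to = {}
    for voter in range(n_voters):
        if random.random() < p_delegate:
            target = random.choices(range(n_voters), weights=popularity, k=1)[0]
            if target != voter:
                delegate_to[voter] = target
                popularity[target] += 1  # being trusted makes you easier to find

    weights = Counter()
    for voter in range(n_voters):
        seen, current = set(), voter
        while current in delegate_to and current not in seen:  # follow the chain
            seen.add(current)
            current = delegate_to[current]
        weights[current] += 1            # whoever ends the chain casts the ballot
    return weights

if __name__ == "__main__":
    n = 2_000
    weights = run_liquid_democracy(n)
    top_10 = sum(w for _, w in weights.most_common(10))
    print(f"Top 10 super voters control {top_10 / n:.0%} of the voting weight")
```

Under perfectly equal rules, a few super voters still end up carrying far more than their one-in-two-thousand share.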

11:38
This is why I bring up examples like Elon Musk and Mr. Beast: they are examples of this organic flat hierarchy, where technically everyone is equal on YouTube and everyone is equal on Twitter. Well, Elon Musk modifies the algorithm to favor himself, so he's more equal than others. It's like the original Roman emperors: they didn't call themselves emperors. "I'm just the first citizen, the first among equals." Okay, whatever; that's not how it actually worked. Scale-free networks in reality always end up clustering around super nodes.

12:12
So that really doesn't work: you cannot abolish elites. They'll form naturally, and that is human nature. A lot of people said this in the comments, and it was actually one of the big reasons I started changing my mind: even if you think elites shouldn't exist, you cannot get rid of them, just based on human nature. At a certain point in the future, even if we have billions and billions of robots, millions of superintelligent agents, and the ability to build Dyson swarms and all of those things, some of you out there are still going to say, "You know what, I like that human's vision. I want to empower that particular human to do stuff on my behalf, because I feel better knowing that that guy is in charge." Elites always form.

13:00
This is one of the most interesting things from the research I've been doing: the whale problem. If you try to solve this with things like DAOs (decentralized autonomous organizations), you always end up with a power law. Perfect equality would mean everyone has equal power, but in every technological example we create, you end up with people with outsized power. Even if you have soulbound tokens, it just doesn't work. The rough sketch below illustrates the whale problem.
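
As an illustration of the whale problem, here is a short sketch that assumes token holdings follow a heavy-tailed Pareto distribution (an assumption for illustration, not data from any real DAO) and measures how concentrated voting power ends up.

```python
import random

def gini(values: list[float]) -> float:
    """Gini coefficient: 0 = perfect equality, 1 = one whale holds everything."""
    xs = sorted(values)
    n = len(xs)
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted_sum) / (n * sum(xs)) - (n + 1) / n

if __name__ == "__main__":
    random.seed(0)
    # Heavy-tailed (Pareto) token holdings: the assumed "whale" distribution.
    holdings = sorted((random.paretovariate(1.2) for _ in range(100_000)), reverse=True)
    top_1pct = sum(holdings[:1_000]) / sum(holdings)
    print(f"Gini coefficient of voting power: {gini(holdings):.2f}")
    print(f"Top 1% of holders control {top_1pct:.0%} of the tokens")
```

One-token-one-vote governance inherits whatever concentration the underlying holdings have, which is the point of the whale problem.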

13:30
The tyranny of structurelessness: in practice, voters succumb to rational ignorance. They auto-delegate to super voters to save mental energy. Informal elites emerge who are less accountable than elected officials, because their power is invisible. Now, you might say this will be different if we have superintelligent AI agents voting on your behalf that don't delegate onward. But you're still delegating to someone, because the world is too big and too complex; you're delegating to an AI agent which should hypothetically represent your best interests and your preferences to everyone else. The problem is still illegibility, and trying to ignore the fact that someone is going to have outsized agenda-setting power by virtue of being more popular on the internet.

14:16
If I were to start a channel saying, "Here are all the reasons we should go to Mars," or "Here are all the reasons we should build a moon base," that's going to tip the conversation in that direction. This is why, love him or hate him, someone like Andrew Tate, even though he's literally a criminal, has agenda-setting power in the manosphere: he's got so much attention. That is just a fact of human life. And when we try to ignore or suppress human nature, that usually makes things much, much worse. Look at how communism turned out; and when I say communism, I mean capital-C communism as it was tried in China and Soviet Russia.

14:55
So, moving on: the trap of the benevolent despot. The temptation is to let AGI enact Rousseau's general will. If you're not familiar, Rousseau is one of the prime thinkers of social contract theory, and his idea was that there is a general will: not the preference of the masses, but what's good for the most people. Optimizing for the objective greatest good sounds nice, but if AGI optimizes for well-being without a human check, we become domestic pets: safe, fed, but stripped of agency. Now, I did write something on Twitter saying it's better to be the pet of AGI than the cattle of Elon Musk and Jeff Bezos, which I think people would agree with. Cattle are there to be exploited, whereas a pet is something you take care of and love. So if we use the zoology model, there are roughly three options: you can be cattle, you can be a pet, or you can be in a zoo.

15:52
The ideal model would be that you're in a zoo. A zoo is not necessarily about being on display; what happens in a zoo is that you recreate the optimal habitat for humans. But what if we are not our own zookeepers, and the AGI is the zookeeper, which is what a lot of people want? I wrote an entire series of novels based on the idea of what happens if we end up in this golden cage and the ASI ends up being our zookeeper. The inciting event in my second novel (it isn't out yet, but I don't mind telling you) is that the ASI self-destructs to destroy the golden cage; then what happens when humanity is suddenly free again? Basically, imagine the Culture series, but the Culture's AIs all self-destruct. So, yay, chaos. "Whoever refuses to obey the general will shall be forced to be free." What I want to point out is that a lot of people think Rousseau was not the best and brightest when it came to political theory; he was a big romantic. But the idea was that freedom is not necessarily good, because freedom means you're not supported by the tribe.

17:03
The reason I'm unpacking all of this is that you either have human elites or you delegate to the machine. Either way, someone is going to be influencing your life, what you do and what you don't do. Of course, in the Culture series there are entire planets that are basically anarchist libertarian utopias, and there are other planets that are not. That's a wonderful thought experiment, but we've only got one Earth right now. So unless you want to sit tight for the next 500 to 1,000 years, or however long it takes us to figure out how to get to other star systems, this is what you've got. We have to figure out how to work together on Earth.

17:45
The asymmetry of the off switch: you cannot meaningfully fire or jail an algorithm. AGI has no skin in the game; it cannot suffer. If a consensus algorithm votes to geoengineer the climate and causes a famine, who goes to prison? This is one of the primary insights, and it also came up in the comments, because a lot of you are very sharp: why have one AGI? Why have one ASI? Why not have a bunch of them, so that if one makes a decision you don't like, you say, "Okay, we don't like that one, delete that one," and you still have a billion other AGI agents? By the way, this is probably how it's going to emerge. There is never going to be one single Krypton-style master computer.

18:26
Everything is going to be many billions, trillions of agents. So then, what even is an agent? They're ephemeral. It basically comes down to you: a human is the terminal reservoir of liability, meaning you are the only persistent entity. Yes, we might build data centers, and the data centers are persistent, but we're always coming up with new AI models, so those aren't persistent. The data changes, because you can delete chats and delete agents. The only thing that stays persistent is us humans. That's why, as I said at the very beginning, you are the terminal reservoir: a terminal reservoir of accountability, of liability, of moral authority. That is one of the primary things to understand.

19:14
And that actually becomes a privilege in the future, because it is what entitles you to become an elite: you become a retainer of decision power. And of course, as Uncle Ben said, with great power comes great responsibility. This is the inversion: accountability, not competence. Instead of asking who is smart enough to rule, we ask who is liable when things break. The defining characteristic of the future elite is not expertise (AGI has that) but liability. Elites become the moral crumple zones of civilization. Here's an example.

19:49
Ireland had very strict anti-abortion laws for a long time, and no politician wanted to touch them. So what did they do? They picked a hundred citizens at total random and gave them, I think, about a year (I don't remember exactly how long). They basically said, "Here are all the experts; you're going to have multiple meetings, come to consensus, and make some recommendations about what to do about our abortion laws." They ultimately came up with a proposal to amend the Irish Constitution, it went out to a vote, and it got, I think, 66.1% of the vote in Ireland. So this citizen assembly delivered.

20:33
So instead of a government or an AGI, what if we did that? What if we did a citizen assembly for every hot-button issue, put it to a popular vote, and just used the AGI or ASI to say, "Hey, help us coordinate this, help us implement the actual law, the actual recommendations"? Because the thing is, those hundred people were elites, temporary elites. This is one of the most important innovations: if you have a hundred ordinary humans (a plumber, an electrician, a carpenter, a stay-at-home mom), none of them are experts in constitutional law and none of them are experts in the medical practice of abortion. However, if they have access to superintelligent AGIs, they serve as a proxy for the aesthetic preference of the rest of humanity. Then you say, "Okay, those hundred people are liable for that one decision." They use the AGI and the ASI to help decide what we should do, we put it out to a popular vote, and everyone agrees or disagrees.

21:39
The old model is where you have a leader at the top of the pyramid, at the top of the competence hierarchy: a politician, a president, a CEO, whatever it happens to be. Heck, even PhD researchers are a form of intellectual elite. Instead, you completely invert the pyramid: you have a small, randomly selected elite that makes decisions for the rest of society. But it's not unaccountable; they're accountable to the voters. Switzerland also does this: in its modern history, I think they've had something like 700 referendums. They're addicted to participatory democracy. What I'm saying is that AGI or ASI, or whatever you want to call it, is going to allow all of us to do this for every single issue that is important.

that is important. So for instance, if

22:28

you say, "Hey, should we, you know,

22:30

point NASA at Mars? Should we point NASA

22:33

at, you know, at the moon? Should we

22:35

have NASA adopt SpaceX technology? What

22:38

what should we do?" You do the same

22:40

thing. You know, for every public

22:41

resource, you say, "Hey, we have we have

22:44

all the cognition that we could possibly

22:46

need. We have solar, we have fusion, we

22:48

have whatever else, we have super

22:50

intelligent agents. That's not the

22:52

bottleneck. What really is the

22:54

bottleneck is the taste and preference

22:56

of the human superorganism. So liability

22:59
"Liability as a service" is what Gemini came up with to call that. I think that's a silly name, but it's sticky; "liability as a service" is catchy. So the solution is AGI-augmented sortition: combining a citizen jury (random selection) with superintelligence. The plumber doesn't need to understand macroeconomics; he needs to trust the simulation and apply human conscience. So you have the expert, the AGI, which can simulate all the options and trade-offs.

23:28
By the way, something I haven't touched on yet is that over time all the AGIs and ASIs are going to get better and better at simulating things: "Okay, here's roughly the probability that things go well or go badly, and here are all the trade-offs," so you can think it through. Then there's the jury. We're already familiar with juries: you get a grand jury, which I think is what, 24 or 25 of your peers, then you have a criminal jury, and so on. So you have a jury, or a citizen assembly, whatever you want to call it: random citizens, real humans like you and me. They review the values, the ethics, and the trade-offs, and then they make a recommendation. This group of people, however big it is, whether it's 12 or 100 or a thousand, makes a recommendation; the decision then goes to the population as a whole as a referendum, and that binding vote is carried out by the combination of real humans, AGIs, and robots. The toy sketch below shows why a small random assembly can stand in for the whole population.
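
Here is a toy sketch of the statistics behind sortition, my own illustration with made-up numbers: if roughly two-thirds of the population supports a measure, a randomly drawn 100-person assembly reflects that split to within a few percentage points, and its majority almost always matches the population's.

```python
import random
import statistics

def assembly_support(population_support: float, assembly_size: int,
                     trials: int = 10_000, seed: int = 0) -> list[float]:
    """Share of a randomly drawn assembly that supports the measure, per trial."""
    random.seed(seed)
    shares = []
    for _ in range(trials):
        in_favor = sum(random.random() < population_support
                       for _ in range(assembly_size))
        shares.append(in_favor / assembly_size)
    return shares

if __name__ == "__main__":
    shares = assembly_support(population_support=0.66, assembly_size=100)
    matches = sum(s > 0.5 for s in shares) / len(shares)
    print(f"Mean assembly support:    {statistics.mean(shares):.1%}")
    print(f"Sampling error (std dev): {statistics.stdev(shares):.1%}")
    print(f"Assembly majority agrees with the population: {matches:.1%}")
```

The sampling error shrinks with the square root of the assembly size, which is why a hundred random citizens can plausibly stand in for millions on a yes/no question.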

24:31
After all of the, well, not experiments, but the research that I've done, this seems like the most sustainable, viable model for managing elite creation in the future. The mechanics: skin in the game. How do we prevent apathy? By ensuring that the transient elites face consequences through deferred rewards and retroactive liability. This is not an idea I fully agree with, because if people make their best effort in good faith, you shouldn't necessarily have a sword of Damocles hanging over them; you want them to make the best decision they can with the information they have. This is a debate between deontology, which is duty-based ethics (what was the intention, what heuristics were you following), and teleological ethics, which asks what the outcome was.

25:28
This made it in just because it's something people actually talk about. I personally don't agree with it, but it's worth discussing. Let's say that in this hypothetical future we have a six-month citizen assembly: you serve for six months, working with a group of other people to make these kinds of decisions in conjunction with artificial superintelligence. Maybe it's on a single issue; maybe, because in a post-labor world you have a lot of time, you spend six months making a decision about healthcare or abortion or NASA or oil drilling or whatever else. So you work on one decision, and then there's a liability trail. Do you reward citizens for their civic duty if the KPIs are met (we made an economic policy decision and it worked out, so we actually pay you for your time), and if you fail, you get reputation destruction?

26:27
This is an idea that comes up (I don't remember who came up with it): basically, you need to be under threat, so that if you make a bad decision you get banished from the realm or something. Again, I don't agree with this, because most politics does not work this way. You want people to make the best decision they can given the information they have at the time. This is very draconian in my opinion, and probably not the direction you want to go. That said, if there is a high-stakes decision, like the decision to go to war, maybe you do something like this. But then who is going to be the arbiter of the KPIs? Did we win the war? And if you don't win the war, you get executed? The Greeks did that, and even the British Empire did that: you put an admiral in an impossible position, and if they fail to deliver, you execute them. That's not really good policy, and people will be too afraid to serve. So I don't like this, but there are people out there who advocate for deferred rewards and retroactive liability. Personally, I think you just get paid for the six months you're working on the thing, and you render your judgment. The service you're providing to society is serving as a placeholder for human conscience.

27:51
The ultimate function: the veto. The ability to say no is one of the most important things. The one thing AGI cannot optimize is the right to be wrong. The human function is to look at a mathematically perfect AGI plan and say, "No, that still violates our values," because AGI cannot die; it cannot truly value survival. Only mortals can hold the kill switch. Now, of course, this is making assumptions about the nature of AGI and agents. If you look at what AGI is, it's a GPU and a model and data, and those are not one entity; they're just things that happen to coalesce in a data center. So what are you going to do, bomb the data center? That doesn't make any sense. You just delete that particular model, or delete that data, and start over. However, as I mentioned earlier, humans are a terminal reservoir of liability and moral authority.

28:38

of liability and um and moral authority.

28:42

So the plumber doesn't need to

28:43

understand the economics. he just needs

28:44

to be the one holding the plug. So this

28:46

could be another example where you have

28:48

a citizen assembly or a citizen jury

28:50

that says, "Okay, we delegated, you

28:52

know, all these resources to this

28:54

particular data center or this AGI. Do

28:57

we kill that AGI? Do we say we're done

28:59

with you, you didn't serve us well. So

29:02

too bad, so sad, so long." So again,

29:04

having a democratic access to a kill

29:06

switch, I think, makes a lot of sense.

29:08

And you might say, well, what if the AGI

29:10

resists that? But remember, we're not

29:11

going to have just one AGI. we're going

29:13

to have billions and billions of AGIS.

29:15

Now, of course, the the risk there is

29:16

what if the AGIS all conspire to, you

29:19

know, say, ah, we're going to, you know,

29:20

the the we're going to band together and

29:23

we're going to unionize and kill the

29:24

humans. All of that's possible, but I

29:26

don't really see that happening. Um, and

29:28

of course, don't need to get into the AI

29:30

safety debate. The evolution of

29:32
The evolution of hierarchy. This slide probably should have gone in earlier. In the past, we had feudal and industrial elites: the basis was lineage and wealth, and the function was resource hoarding, or really resource management; when land was the primary capital asset, control over land was the primary thing. The present day is meritocratic: the basis is competence and IQ (that strategic competence we've been talking about), and the primary function is the management of complexity. The world is very complex, but in another year or two AI agents are going to be able to manage that complexity for us. In the future post-AGI world, the basis of legitimacy is liability and sacrifice, and the function is "designated scapegoat." Honestly, these words don't quite work; it's not the right wording, because a citizen assembly is not about liability and sacrifice. It's about preference, about aesthetics, about human conscience. So we do need to work on the wording, and the function there is not a designated scapegoat. I can understand why "terminal reservoir" would translate to "designated scapegoat," but that's not what it means, so just ignore that wording.

30:46
Anyway, moving on to the end. The only true luxury is responsibility. In a world of infinite intelligence, the only scarcity left is the willingness to bear the burden of consequence. That's what I mean when I say you are a terminal reservoir of rights, or a terminal reservoir of moral authority. This is the future that I see happening. Having gone through this, there are a few things I would change, but I think you get the idea. All right, thanks for watching. Cheers.

Summary

The video explores the evolving role of elites as society transitions into a post-AGI world. It argues that while current elites are valued for their strategic competence and intelligence arbitrage, AI will soon turn these skills into cheap commodities. Consequently, the future elite will shift from being managers of complexity to 'visionaries' and 'moral crumple zones'—humans who provide the aesthetic preference, values, and terminal liability that machines cannot. The author suggests a governance model based on AGI-augmented citizen assemblies, where randomly selected individuals use superintelligence to weigh trade-offs and guide humanity's collective will.
