How AGI will DESTROY the ELITES
So something that's been on my mind
lately is what is the future role of
elites? Now I asked this on the internet
and the overwhelming majority of people
said well I don't understand what the
point of elites is today. So first
before we say what is the point of
elites in the future we have to
establish what function do they serve
today and the number one thing is
management of complexity. So the best definition of elite that I have come across in my research, and this is salient to post-labor economics, is a minority group of people with outsized agenda-setting power. Now
that is kind of generic or abstract. So
when we say outsized agenda setting
power, what we mean is a group of people
that have political pull, intellectual
pull, financial pull, something they
have some ability to change what society
does. So a politician is a form of an
elite. A billionaire is a form of elite.
And even people with a lot of cognitive
attention are a form of elite. So Mr.
Beast is an attention elite. Um and then
other people you know other big
influencers like Andrew Tate that is a
form of elitism.
Um, so you say, what is it that they did? And it was strategic competence, which is one of the primary things in this day and age. Over history, there have been different kinds of elites. So way back in the day you had the warrior elite: someone who was able to provide martial force, or violence, for the protection of a particular group of people. That's the feudal lords and that sort of thing, and even earlier, the war chiefs. So that was a form of
elite and then another form of elite in
you know, Paleolithic times was people
who were more in tune with reality. They
understood the weather. They understood
what you know what plants needed and
what animals needed and those sorts of
things. So moving forward to today, one
of the primary things that we have is
intelligence arbitrage. So intelligence
arbitrage is basically I'm smarter than
the average bear and therefore people
listen to me and I can make stuff
happen. So Elon Musk is a prime example
of this where he is smarter than
average. He's also very ruthless same as
every other tech billionaire. And
because of the compression of
intelligence and the compression of
strategy, they are able to solve
problems for society. Uh when I say
solve problems for society, the primary
problem that Amazon solved is getting
stuff to you faster. That is how Jeff
Bezos was able to parlay his
intelligence into rent-seeking
behavior. And the rent that he's seeking
is, well, I can get stuff to you faster
and cheaper than anyone else. Therefore,
I'm going to make mad bank on that. Elon
Musk figured out, hey, I can get stuff
to space cheaper, so I'm going to make
mad bank on that and whatever else, you
know. So the idea, however, is that in a post-AGI world, high-level strategy and logistics are commodities. Superintelligence is cheap, abundant, and on
demand. Meaning that the same exact
thing that made Jeff Bezos and Mark
Zuckerberg and Elon Musk wealthy is no
longer going to be a differentiating
factor. So we are approaching an
inversion point as the entire internet
is losing its mind over things like
OpenClaw, and there was that article that
someone wrote that you know big things
are happening. What most people don't
realize is that if they became wealthy
by parlaying their intelligence and
compressing strategy or using
competence, if you had to boil it down to one word,
competence, the market value of
competence is going to drop off a cliff
because every single person is soon
going to have their own agentic chief of
staff that is smarter than every other
billionaire and every other PhD
researcher and every other Nobel Prize
winner, combined. So then the only thing that
matters is accountability and liability.
Um at least in terms of what you're
actually offering the economy. Basically
you become a moral crumple zone, where someone can say, hey, you did something bad, we don't like you. There is another
dimension to it which I won't cover in
this video. Actually, I guess we will cover it briefly: I call it the vision, values, and reputation dimension.
Um, so anyways, moving on. Breaking the iron law of oligarchy. Robert Michels's iron law states that organization requires delegation, leading inevitably to oligarchy. Infinite agentic bandwidth breaks this. We move from representative democracy (low bandwidth) to direct hyper-negotiation (infinite bandwidth).
So right now the way that the entire
world works is that there's a whole
bunch of users or citizens or voters or
customers and they all send information
up to representatives. So basically
whenever you're unhappy with OpenAI or
Twitter or Tesla or whatever
they're one company and you know you
have product managers and those sorts of
things and project managers that are
trying to aggregate all of those
preferences to make a better product or
make better decisions. Same thing for
representatives in Congress. Same thing
for senators in the Senate. And so then
this is the competence
arbitrage of trying to aggregate the
needs and preferences of many many
people sometimes hundreds sometimes
thousands sometimes millions of people
and then put that into one concrete
block of output whether it's a law or a
product or a decision or that sort of
thing. However, what we are moving towards, and this is inspired by what Moltbook represents (Moltbook is of course a very early example), is this: if you have dozens or hundreds or thousands of hyper-intelligent agents advocating on your behalf, debating everything that you care about and everything that you need and want with every single other agent in the world, then bandwidth is no longer a bottleneck. And not only is this massively parallel, each one of those agents is smarter than you. And so agent-to-agent direct micro-negotiation then becomes the default way of making decisions and getting feedback.
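As a toy illustration of the bandwidth point (this is my own sketch, not anything described in the video or a real product), here is a tiny model comparing a low-bandwidth representative, who only hears each citizen's stated position, against pairwise agent-to-agent micro-negotiations that also carry how much each citizen cares. The citizens, the intensity weighting, and the negotiation rule are all invented for illustration.

```python
import random
from itertools import combinations

random.seed(0)

# Each citizen's stance on one issue: a position in [0, 1] plus how much they care.
citizens = [
    {"position": random.random(), "intensity": random.uniform(0.1, 1.0)}
    for _ in range(1_000)
]

# Low-bandwidth path: a representative hears one coarse signal per citizen
# (their position only) and averages it; intensity never makes it upstream.
representative_outcome = sum(c["position"] for c in citizens) / len(citizens)

# High-bandwidth path: every citizen's agent negotiates with every other agent.
# In this toy, each pairwise deal lands on the intensity-weighted midpoint, and
# the final outcome is the average of all those micro-deals.
deals = []
for a, b in combinations(citizens, 2):
    weight = a["intensity"] + b["intensity"]
    deals.append((a["position"] * a["intensity"] + b["position"] * b["intensity"]) / weight)
agent_outcome = sum(deals) / len(deals)

channels_rep = len(citizens)                                 # one channel per citizen
channels_agents = len(citizens) * (len(citizens) - 1) // 2   # every pair talks directly

print(f"representative outcome: {representative_outcome:.3f} via {channels_rep} channels")
print(f"agent-negotiated outcome: {agent_outcome:.3f} via {channels_agents} channels")
```

The point of the toy is just the channel count: the pairwise path uses roughly half a million channels for a thousand citizens, which humans could never sustain but cheap agents could.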
When that happens, then the value of today's elites, the aggregators of competence, whatever competence looks like, whether it's raw intelligence, strategic competence (or strategic incompetence, that's the government), or technical competence, goes away. And so then how do you make decisions? How does the marketplace work? How does democracy work? The entire arbitrage that elites offer goes away. So we go from managers to visionaries, and this is where my framework of vision, values, and reputation comes in, although it didn't make it into the slide deck. So basically the new elite is when you say, hey, let's go to Mars. The preference coalition builders. When the how, the execution, becomes costless or otherwise equal (and when I say equal, I mean everyone has the same ability to execute), value shifts entirely to the what, which is the vision, and the why, which is the values. So that's the vision and values. Power moves from managers, which are arbitrage optimizers, to proposers, the people who are the most inspiring. These are the preference coalition builders, those who can get 51% of the population to say, "Let's go to Mars."
So this is one of the primary things
that Elon Musk either explicitly
understood or intuitively understood
which is why he bought Twitter. So this
is the attention elite. So we're moving
from a more technical elite to where
it's a vibes-based elitism. So the
charismatic elite, influence without
structural control. If you can marshal
enough people to believe in your vision,
then you can make things happen.
Particularly in the post-AGI future, or the post-ASI future. When everything comes down to, hey, intelligence isn't a bottleneck, human labor isn't a bottleneck, then it becomes: what is the preference? What is the preference of humanity? That becomes the most scarce resource. Where does humanity want its
attention to go? Now, of course, when
you have 8 billion individual agents all
with different uh desires and you're
able to arbitrage that over many many
different projects cuz we have a whole
planet to work on, you can end up with
many many visionaries kind of saying,
"Hey, you know, like I'm over here doing
post labor economics and someone else is
over there doing, you know, cancer
research and we're going to have a
hyperabundance of cognition." So then
it's like I'm going to point it at that
direction. And to be completely transparent, this is something that I stumbled into as well, because my entire mandate comes from you, from my Patreon supporters, from my Substack subscribers, from my Twitter subscribers, because all of you say, "Dave, I want to empower you because you're solving problems that I care about." So I am a very early version of this. This goes beyond the attention economy. This is the preference economy of humanity. This is the vision economy. So next: the persistence of
gravity. Why flat doesn't work. So you
might think, well, why don't we just do
a flat hierarchy? And the problem, and I did some research into this, is that every time people do experiments with flat hierarchies, you end up with unspoken hierarchies, which are often even worse. So flat doesn't work even without structural elites. And in this case, a structural elite is one where the position is explicit: you are a billionaire, you are a senator, you are a governor, whatever it happens to be. If the hierarchy is not made explicit, you still end up with hierarchies, just ones that are not legible, which is arguably worse. Network hubs emerge
due to preferential attachment and
scale-free networks. New nodes connect
to existing hubs, creating super nodes
that process the majority of traffic.
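To make the preferential-attachment claim concrete, here is a minimal toy simulation (my own sketch, with invented parameters) of a growing network where each new node links to an existing node with probability proportional to that node's current degree. Even though every node follows the same rule, the early, well-connected nodes snowball into super nodes holding a disproportionate share of all connections.

```python
import random
from collections import Counter

def grow_network(n_nodes: int, seed: int = 42) -> Counter:
    """Toy preferential-attachment growth (one link per new node).

    Each new node attaches to an existing node chosen with probability
    proportional to that node's current degree, so early nodes snowball
    into hubs ("super nodes").
    """
    rng = random.Random(seed)
    degrees = Counter({0: 1, 1: 1})  # start with two connected nodes
    edge_ends = [0, 1]               # each node appears here once per edge endpoint

    for new_node in range(2, n_nodes):
        target = rng.choice(edge_ends)  # degree-proportional choice of attachment point
        degrees[new_node] += 1
        degrees[target] += 1
        edge_ends.extend([new_node, target])
    return degrees

if __name__ == "__main__":
    degrees = grow_network(10_000)
    total = sum(degrees.values())
    top_1_percent = sum(d for _, d in degrees.most_common(len(degrees) // 100))
    print(f"top 1% of nodes hold {top_1_percent / total:.0%} of all connections")
```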
This is why something like liquid democracy just doesn't work in simulations and experiments. Liquid democracy, if you're not familiar with it, is basically direct democracy, but because there are too many things to vote on, you delegate parts of what you care about. Let's just use an example. You say, I'm going to delegate all my votes to Dave on AI policy. Great. Well, that means I'm going to be voting on your behalf, and I could be voting for a quarter of a million people, which is very, very outsized influence. So then I become an influence elite. Now, of course, you might say, well, I take that vote back and I give it to someone else. But then there are going to be parts of AI that I don't fully understand, and I'm going to delegate my takes on AI safety and business policy and other things to other people. And guess what? You end up with a trophic layer.
So, if you remember from high school biology, ecology 101, you have the trophic layers. There are the autotrophs at the bottom, which are plants, and then you've got the herbivores, and then you've got the predators, and then you've got the apex predators. Whenever we try to do experiments and we say, "We're going to create a completely flat hierarchy and we're going to try to scale that," you still end up with apex predators, people who aggregate more preference than anyone else. And this is
why I bring up examples like Elon Musk and Mr. Beast: because they are examples of this organic flat hierarchy, where technically everyone is equal on YouTube and everyone is equal on Twitter. Well, Elon Musk really modifies the algorithm to favor himself. So he's more equal than others. It's like the original Roman emperors. They weren't the emperors; they didn't call themselves emperors. I'm just the first citizen, I'm the first among equals. Okay, whatever. It's not how that actually works.
So scale-free networks in reality always end up clustering around super nodes. So that really doesn't work: you cannot abolish elites. They'll just form naturally, and that is human nature. And a lot of people said this in the comments, and this was actually one of the big reasons that I started changing my mind: okay, even if you think that elites shouldn't exist, you cannot get rid of them, just based on human nature. At a certain point in the
future, even if we have a million or
let's just say like we have billions and
billions of robots and we have millions
of super intelligent agents and we can
build Dyson swarms and all of those
things. Some of you out there are still
going to say, you know what, I like that
human's vision. I want to empower that
particular human to do stuff on my
behalf because I feel better knowing
that that guy is in charge. Elites
always form. This is one of the most
interesting things from the research
I've been doing the whale problem. So if
you try and solve this with things like
Dows, so decentralized autonomous
organizations is that you always have a
power law. So perfect equality means
that everyone has equal power, but
every every technological example that
we that we create, you always end up
with people with outsized power. Even if
you have soulbound tokens, it just
doesn't work. So the tyranny of
structuralness in practice voters
succumb to rational ignorance. They
autodelegate to super voters to save
mental energy. Informal elites emerge
who are less accountable than elected
officials because their power is
invisible. Now you might say, well this
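Here is a small toy simulation of that super-voter dynamic (again my own sketch, with made-up parameters): most voters auto-delegate out of rational ignorance, delegates are chosen with a bias toward people who already hold many delegations, and voting power piles up on a handful of informal elites whose influence is invisible on any org chart.

```python
import random
from collections import Counter

def simulate_liquid_democracy(n_voters: int = 5_000, p_delegate: float = 0.8, seed: int = 1) -> Counter:
    """Toy liquid-democracy run: returns final voting power per ballot-casting voter.

    Most voters delegate out of rational ignorance; delegates are picked with
    probability proportional to how many delegations they have already received,
    so prominence attracts more delegations (rich-get-richer).
    """
    rng = random.Random(seed)
    received = [1] * n_voters   # smoothing weight so everyone starts choosable
    delegate_of = {}            # voter -> whom they handed their vote to

    for voter in range(n_voters):
        if rng.random() < p_delegate:
            target = rng.choices(range(n_voters), weights=received, k=1)[0]
            if target != voter:
                delegate_of[voter] = target
                received[target] += 1

    power = Counter()
    for voter in range(n_voters):
        current, hops = voter, 0
        # Follow the delegation chain to whoever actually casts the ballot.
        # Cycles are possible in this toy; the hop limit just breaks them arbitrarily.
        while current in delegate_of and hops < n_voters:
            current, hops = delegate_of[current], hops + 1
        power[current] += 1
    return power

if __name__ == "__main__":
    power = simulate_liquid_democracy()
    total = sum(power.values())
    top_ten = sum(votes for _, votes in power.most_common(10))
    print(f"top 10 super-voters carry {top_ten} of {total} votes ({top_ten / total:.0%})")
```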
Now you might say, well, this is going to be different if we have superintelligent AI agents who are voting on your behalf, so you don't delegate to another person. But you're still delegating to someone, because the world is too big and too complex. You're delegating to an AI agent, which should hypothetically represent your best interests and your preferences to everyone else. But the problem still becomes illegibility, and trying to ignore the fact that someone is going to have outsized agenda-setting power by virtue of being more popular on the internet. If I were to start a channel
saying, you know what, here's all the
reasons that we should go to Mars or
here's all the reasons that we should
build a moon base, that's going to tip
the conversation in that direction. And
this is why, love him or hate him, people like Andrew Tate, even though he's literally a criminal, have agenda-setting power in the manosphere: because he's got so much attention. And that is just a fact of human life. And the thing is, when we try to ignore or suppress human nature, that usually makes things much, much worse. Look at
how communism has turned out. And when I
say communism, I mean capital C
communism as it was tried in China and
Soviet Russia. So, moving on: the trap of the benevolent despot. The temptation is to let AGI enact Rousseau's general will. So, if you're not familiar, Rousseau is one of the prime thinkers of social contract theory. And his idea was that there is a general will, and it's not the preference of the masses; it's what's good for the most people. Optimizing for the objective greatest good. But if AGI optimizes for well-being without a human check, we become domestic pets. Safe, fed, but stripped of agency. Now, I did write
something on Twitter saying it's better
to be the pet of AGI than the cattle to
Elon Musk and Jeff Bezos, which I think
people would agree with. You know, cattle are there to be exploited, whereas a pet is something that you take care of and that you love. So, if we use the zoology model, there are kind of three models: you can be cattle, you can be a pet, or you can be in a zoo.
The ideal model would be that you're in a zoo. And a zoo is not necessarily that you're there on display; what happens in a zoo is that you recreate the optimal habitat for humans. But what if we are not our own zookeepers, if the AGI is the zookeeper, which is what a lot of people want? And I mean, I wrote an entire series of novels based on the idea of what happens if we end up in this golden cage and the ASI ends up just being our zookeeper. In the second novel, and the novel isn't out yet, but I don't mind telling you, the inciting event is that the ASI self-destructs to destroy the golden cage. And then what happens when humanity is suddenly free again?
Basically, imagine the Culture series, but the Culture all self-destructs. So, yay, chaos. Um, "whoever refuses to obey the general will shall be forced to be free." Now, what I want to point out is that a lot of people think Rousseau was not necessarily the best and brightest when it came to political theory. He was a big romantic, but the idea was that freedom is not necessarily good, because freedom means that you're not supported by the tribe.
However, the reason I'm unpacking all of this is because you either have human elites or you delegate to the machine. Either way, someone is going to be influencing your life, what you do and what you don't do. Now, of course, in the Culture series, there are entire planets that are basically, you know, anarchist libertarian utopias. There are other planets that are not. That's a wonderful thought experiment, but we've only got one Earth right now. And so, unless you just want to sit tight for the next 500 to 1,000 years, or however long it takes us to figure out how to get to other star systems, this is what you've got. So, we've got to figure out how to work together on Earth. The asymmetry of the off switch.
You cannot meaningfully fire or jail an
algorithm. AGI has no skin in the game.
It cannot suffer. If a consensus
algorithm votes to geoengineer the
climate and causes a famine, who goes to
prison? This is one of the primary insights. And this also came up in the comments, by the way, because a lot of you are very sharp: well, why have one AGI? Why have one ASI? Why not have a bunch of them? So that if one makes a decision you don't like, you say, "Okay, well, we don't like that one. Delete that one," and then you have a billion more different AGI agents. And by the way, this is probably how it's going to emerge. Anyways, there is never going to be one single, you know, Krypton-style master computer.
Everything is going to be many, many,
many billions, trillions of agents. And
so then it's like, well, what even is an
agent? Because they're ephemeral. It
basically comes down to you. A human is
the terminal reservoir of liability.
Meaning you are the only persistent
entity. Yes, we might build data centers
and the data centers are persistent, but
the AI models, we're always coming up
with new AI models, so those aren't
persistent. The data changes because you can delete chats, you can delete agents. So the only thing that remains persistent is us, us humans. And that's why, as I said near the beginning of this, you are the terminal reservoir. You are a terminal reservoir of accountability, of liability, of moral authority. That is one of the primary things to understand.
And actually, that becomes a privilege in the future, because that is what entitles you to become an elite. You become a retainer of decision power. And of course, with great power comes great responsibility, as Uncle Ben said. The inversion: accountability, not competence. So instead of asking who is
liable when things break. The defining
characteristics of the future elite is
not expertise. AGI has that, but
liability. Elites become the moral
crumple zones of civilization. Here's an
example.
Ireland had very strict anti-abortion laws for a long time, and no politician wanted to touch it. So what did they do? They literally picked a hundred citizens at total random, and they gave them, I think it was about a year, I don't remember how long it was, but they basically said: okay, here are all the experts, you're going to have multiple meetings, you're going to come to consensus, and you're going to make some recommendations about what to do about our abortion laws. They ultimately came up with something to amend the Irish Constitution, and it went out and got, I think it was, 66.1% of the vote in Ireland. And so this citizen assembly said, "Okay, cool."
Instead of a government or an AGI, because, you know what? What if we did that? What if we did a citizen assembly for every hot-button issue and then put it to a popular vote, and we just use the AGI or
the ASI to say, "Hey, help us coordinate
this. Help us implement the actual law,
the actual recommendations." Because the
thing is those hundred people were
elites. They were temporary elites. This is one of the most important innovations: if you have a hundred ordinary humans, yes, you've got a plumber, you've got an electrician, you've got a carpenter, you've got a stay-at-home mom, and none of them are experts in constitutional law, none of them are experts in the medical practice of abortion. However, if they have access to superintelligent AGIs, what they then do is serve as a proxy for the aesthetic preference of the rest of humanity. And so then you say, "Okay, well, those hundred people are liable for that one decision." And then they use the AGI and the ASI to help decide what we should do, and then we put it out to a popular vote, and then everyone agrees or disagrees. So the old model is
where you have a leader at the top of
the pyramid. So they're at the top of
the competence hierarchy. They're a
politician, they're a president, they're
a CEO, whatever it happens to be.
I mean, heck, even PhD researchers are a form of intellectual elite. So instead, you completely invert the pyramid. You have a small, randomly selected elite that makes decisions for the rest of society. But it's not unaccountable; they're accountable to the voters. Switzerland also does this, where, I think in the modern history of Switzerland, they've had something like 700 referendums. They're addicted to participatory democracy. And what I'm saying is that AGI, or ASI, or whatever you want to call it, is going to allow us all to do this for every single issue that is important. So, for instance, if
you say, "Hey, should we, you know,
point NASA at Mars? Should we point NASA
at, you know, at the moon? Should we
have NASA adopt SpaceX technology? What should we do?" You do the same thing for every public resource. You say, "Hey, we have all the cognition that we could possibly need. We have solar, we have fusion, we have whatever else, we have superintelligent agents. That's not the bottleneck. What really is the bottleneck is the taste and preference of the human superorganism." So, liability as a service is what Gemini came up with as a name for that. I think that's a silly name, but you know, it's sticky. Liability as a service, it's catchy. So the solution: AGI-augmented sortition, combining a citizen jury, so random selection, with superintelligence.
The plumber doesn't need to understand macroeconomics. He needs to trust the simulation and apply human conscience. So you have the expert: you have AGI, which can simulate all the options and tradeoffs. By the way, this is something I haven't touched on yet: over time, all the AGIs and ASIs are going to get better and better at simulating things and saying, okay, here's roughly the probability that things are going to go well or go badly, and here are all the tradeoffs, so you can think it through. And then the jury. We're already familiar with juries. You get a grand jury, which I think is what, 24 or 25 of your peers. Then you have a criminal jury, and so on and so forth. So you have a jury, or a citizen assembly, whatever you want to call it. Random citizens, real humans like you and me. They review the values and ethics and the trade-offs, and then they make a recommendation. This group of people, however big it is, whether it's 12 or 100 or a thousand, makes a recommendation, and then the decision goes to the population as a whole as a referendum, and that binding vote is then carried out by the combination of real humans and AGIs and robots. After all of the research that I've done, this seems like the most sustainable, viable model for managing elite creation in the future.
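To make that pipeline concrete, here is a minimal sketch of the loop as described: sortition draws a random jury, an AGI step simulates each option's trade-offs, the jury weighs those trade-offs against its members' values and makes a recommendation, and a binding referendum ratifies it. Everything in the sketch, the option names, the scoring rules, the agi_simulate stub, is invented for illustration; it is not the video's system or any real API.

```python
import random
from dataclasses import dataclass

# Minimal sketch of the loop described above:
# sortition -> AGI trade-off simulation -> jury recommendation -> binding referendum.

@dataclass
class Citizen:
    name: str
    values: dict  # how heavily this person weighs each trade-off dimension

DIMENSIONS = ("prosperity", "risk_avoided", "inspiration")
OPTIONS = ["fund_mars_mission", "fund_moon_base", "fund_earth_science"]

def agi_simulate(option: str) -> dict:
    """Hypothetical stand-in for an AGI that simulates an option's trade-offs.

    Returns a score per dimension in [0, 1]; a real system would derive these
    from actual modeling, not from a seeded random stub.
    """
    rng = random.Random(option)  # deterministic per option, purely illustrative
    return {dim: rng.random() for dim in DIMENSIONS}

def run_assembly(population: list, jury_size: int, rng: random.Random) -> str:
    """Sortition: draw a random jury, let each juror weigh the AGI's simulated
    trade-offs by their own values, and recommend the highest-scoring option."""
    jury = rng.sample(population, jury_size)
    scores = {}
    for option in OPTIONS:
        tradeoffs = agi_simulate(option)
        scores[option] = sum(
            juror.values[dim] * tradeoffs[dim] for juror in jury for dim in DIMENSIONS
        )
    return max(scores, key=scores.get)

def referendum(population: list, recommendation: str) -> bool:
    """Binding ratification: each citizen approves if the recommendation scores
    well against their own values (toy rule); a simple majority carries it."""
    tradeoffs = agi_simulate(recommendation)
    yes = sum(
        1 for citizen in population
        if sum(citizen.values[dim] * tradeoffs[dim] for dim in DIMENSIONS) > 0.75
    )
    return yes / len(population) > 0.5

if __name__ == "__main__":
    rng = random.Random(7)
    population = [
        Citizen(f"citizen_{i}", {dim: rng.random() for dim in DIMENSIONS})
        for i in range(10_000)
    ]
    recommendation = run_assembly(population, jury_size=100, rng=rng)
    print("assembly recommends:", recommendation)
    print("referendum passes:", referendum(population, recommendation))
```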
The mechanics: skin in the game. How do we prevent apathy? By ensuring that the transient elites face consequences through deferred rewards and retroactive liability. This is not an idea that I fully agree with, because if people make their best effort and it's in good faith, then you shouldn't necessarily have a sword of Damocles hanging over them. You want them to make the best decision they can with the information that they have. So this is a debate over deontology, which is duty-based ethics, so what is the intention, or what are the heuristics that you're following, versus teleological ethics, which is about what the outcome is. This made it in just because it is something that people actually talk about. I personally don't agree with it, but it's worth
discussing. So, let's just say in this hypothetical future, we have a six-month citizen assembly, where you serve for six months and you work with a group of other people to make these kinds of decisions in conjunction with artificial superintelligence. Maybe it's on a single issue. Maybe, because in a post-labor world you have a lot of time, you spend six months making a decision about healthcare, or abortion, or NASA, or whatever else, you know, oil drilling. So you work on one decision, and then there's a liability trail. Now, do you reward citizens for their civic duty if the KPIs are met? So it's like, oh hey, we made an economic policy decision, then we actually pay you for your time. And then if you fail, you get reputation destruction. So this would be an idea; this comes up. I don't remember who came up with the idea, but it was basically like, you need to be under threat, so that if you make a bad decision then you get, you know, banished from the realm or something. Again, I don't agree with this, because most politics does not work this way. You want people to make the best decision that they can given the information that they have at the time. So, this is very draconian in my opinion and is probably not the direction that you want to go.
However, with that being said, if there
is a high stakes decision like the
decision to go to war, maybe you do
something like this. But again, who's then going to be the arbiter of the KPIs? Like, did we win the war? And if you don't win the war, then you get executed? The Greeks did that, and even the British Empire did that, where you put an admiral in an impossible position, and if they fail to deliver, you execute them. That's not really good policy, and people will be too afraid to serve. So I don't like this. But there are people out there who advocate for deferred rewards and retroactive liability. Personally, I think you just get paid for the six months that you're working on this thing and you render your judgment. The service that you're providing to society is serving as a placeholder for human conscience.
The ultimate function: the veto. The ability to say no is one of the most important things. The one thing AGI cannot optimize is the right to be wrong. The human function is to look at a mathematically perfect AGI plan and say no, that still violates our values. Because AGI cannot die, it cannot truly value survival. Only a mortal can hold the kill switch. Now, of course, this is making assumptions about the nature of AGI and agents. If you look at what AGI is, it's a GPU and a model and data, and those are not one entity. Those are just things that happen to coalesce in a data center. So what are you going to do? You're going to bomb the data center? That doesn't make any sense. You just delete that particular model, or you delete that data and start over. However, as I mentioned earlier, humans are a terminal reservoir of liability and moral authority.
So the plumber doesn't need to understand the economics. He just needs to be the one holding the plug. This could be another example where you have a citizen assembly or a citizen jury that says, "Okay, we delegated all these resources to this particular data center or this AGI. Do we kill that AGI? Do we say, we're done with you, you didn't serve us well, so too bad, so sad, so long?" So again, having democratic access to a kill switch, I think, makes a lot of sense.
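As a sketch of what democratic access to a kill switch could look like mechanically, here is a toy quorum gate: a sortition jury votes on whether to revoke an AGI deployment's resources, and revocation only happens on a supermajority. The class name, the deployment ID, and the two-thirds threshold are assumptions for illustration, not anything specified in the video.

```python
import random

class KillSwitch:
    """Toy quorum gate: a sortition jury can revoke an AGI deployment's resources,
    but only with a supermajority; abstentions default to keeping it running."""

    def __init__(self, deployment_id: str, jury: list, threshold: float = 2 / 3):
        self.deployment_id = deployment_id
        self.jury = set(jury)
        self.threshold = threshold
        self.votes = {}  # juror -> True (revoke) / False (keep)

    def cast_vote(self, juror: str, revoke: bool) -> None:
        if juror not in self.jury:
            raise ValueError(f"{juror} is not on the jury for {self.deployment_id}")
        self.votes[juror] = revoke

    def decide(self) -> bool:
        # Revoke only if a supermajority of the *full* jury voted to revoke.
        revoke_votes = sum(self.votes.values())
        return revoke_votes / len(self.jury) >= self.threshold

if __name__ == "__main__":
    rng = random.Random(3)
    population = [f"citizen_{i}" for i in range(100_000)]
    jury = rng.sample(population, 100)        # sortition, as in the assembly model above
    switch = KillSwitch("datacenter_agi_042", jury)
    for juror in jury:
        switch.cast_vote(juror, revoke=rng.random() < 0.7)  # toy opinions
    print("revoke resources:", switch.decide())
```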
And you might say, well, what if the AGI resists that? But remember, we're not going to have just one AGI; we're going to have billions and billions of AGIs. Now, of course, the risk there is, what if the AGIs all conspire and say, ah, we're going to band together, we're going to unionize and kill the humans? All of that's possible, but I don't really see that happening. And of course, we don't need to get into the AI safety debate here. The evolution of
hierarchy. So, this slide actually probably should have gone in earlier. In the past, we had feudal and industrial elites. The basis was lineage and wealth, and the function was resource hoarding, really resource management, when land was the primary capital asset and control over land was the primary thing. The present day is meritocratic, so the basis is competence and IQ, that strategic competence that we're talking about, and the primary function is the management of complexity. The world is very complex, but in another year or two AI agents are going to be able to manage that complexity for us. So in the future, post-AGI world, the basis of legitimacy is liability and sacrifice, and the function is designated scapegoat. And honestly, these words don't quite land; it's not the right wording, because a citizen assembly is not about liability and sacrifice. It's about preference. It's about aesthetics. It's about human conscience. So we do need to work on the wording. And the function there is not a designated scapegoat. I can understand why terminal reservoir would translate to designated scapegoat, but that's not what it means. So, just ignore that wording. Um, but anyways, moving on to
the end. The only true luxury is
responsibility. In a world of infinite
intelligence, the only scarcity left is
the willingness to bear the burden of
consequence. That's what I mean when I say you're a terminal reservoir of rights, or a terminal reservoir of moral authority. This is kind of the future
that I see happening. Now, having gone
through this, there are a few things
that I would change, but I think you get
the idea. All right, thanks for
watching. Cheers.