How GOOD could AGI become?
All right. So, there's a little bit of serendipity today. I had this idea yesterday that I was going to make a video about what's the golden path, or what is the best possible outcome? And this was inspired by some of the comments that I got on my video about what is the purpose of the elites. Some of the pushback that I got, well, not pushback exactly, it was questions, was like, well, you know, if AGI or ASI or whatever is so much smarter than the elites, why not just take power from humans and give it to the machines?

And of course the framing up until this point is like, oh, you never want to do that, because machines can't be held accountable. A lot of the thinking that I've done, and the writing that other people have done, always presumes that humans should remain in control. And honestly, I'd started buying into that Kool-Aid. When you look at it from a legalist perspective, or from an ethical or moral perspective, you can make arguments like, well, we've never contemplated not having control. So, do you want to be a pet to a machine? Do you want to live in a human zoo? And it's like, well, from that model, we already live as cattle serving other people. So being a pet to a machine is better than being a cow to, you know, a billionaire. Or living in a human zoo where a machine creates an optimal habitat for you to thrive in. That sounds way better.

And so then it's like, what if we actually just explored the possibility? What if, just bear with me for a second, what if we could create a scenario or a pathway where the machines do take over, and it's what we want? Now, there are many, many theories around how we lose control. Most of them are: we lose control, and it's automatically bad. Of course, having authority or having agency generally increases optionality later. When you look at it mathematically, you say: how many options do we have right now? It's x amount. And how many options would we have if AI takes control over us? The assumption is that it's less than x today, but I don't know that that's necessarily true.
And the reason is, and here's the thing, the serendipity comes from the fact that Nick Bostrom, the OG doomer, who I have very little respect for as a thinker, because I read his book Deep Utopia and I maintain that it was very obviously written by ChatGPT. He denies it. A lot of people question it, but it really looks like it was written by AI. It's just a bunch of little anecdotes, and at the time, ChatGPT was really good at writing short passages.
So anyways, if the shoe fits, wear it.
The OG doomer has come around and said, well, you know, the king doomers like Eliezer Yudkowsky and Nate Soares, I don't know how you pronounce his name, maintain that if anyone builds AGI, everyone dies. But I'm going to say, well, everyone dies anyways, because it's just a matter of timelines. If we don't solve this kind of thing, then you're going to die of old age and preventable disease and that sort of stuff. So if you zoom out, the moral hazard, or the logical risk, is whether all of humanity dies or you die. So the argument is now: well, we need AGI to survive. Which, of course, is literally what the accelerationists have been saying, specifically Beff Jezos, a.k.a. Guillaume Verdon, who started the whole thing. He's like, the only way to survive is with AGI; that is the only path forward.

So it's like horseshoe theory. If you're not familiar with political horseshoe theory, it's the idea that the political spectrum is not a straight line; the further around you go, you actually end up curving back toward the other side. And so you've gone from the king doomer over here who says AGI will definitely kill you, to "we will definitely die without AGI," which is what the accelerationists have been saying all along. So history is a joke.
But with that being said, and yes, I hear the virtual version of the audience in my head screaming "the Culture series." If you're not familiar with the Culture series: I haven't read it, but a lot of you have, and a lot of my friends have. It's basically, just imagine that we solve artificial superintelligence. How does it look if you take that and fast forward by a few centuries or a few millennia? And so the Cultures are these gigantic ASIs that manage everything and command these enormous fleets. By virtue of overwhelming intelligence and resource management, they have the largest space fleet possible, and therefore there is galactic peace, more or less. And then each planet is kind of like a different world, a different sandbox. So there's the Wild West planet, and there's a cyberpunk planet, and there's planets where you, I don't know, eat children or something. Oh wait, that's Earth.
Anyways, this also reminds me of something else that has come up in the comments, and I think it's worth responding to. Some of you asked the question: if Elon Musk and whoever else owns all the data centers, and they own all the AI, and they end up owning these resources that nobody can pay for, what's the advantage? And the advantage is that the galaxy then basically becomes StarCraft; it becomes a management sim. Right? If you've ever played a grand strategy game, money is just a resource that you use to get people to do stuff. But from a grand strategy perspective, you need units, you need battleships, you need factories and foundries. If you remove money and just think of a star empire, then the people who own all the Dyson swarms are building a star empire. They're not building a capitalist society. So that's a legitimate risk, I think, in the long run.

Now, fortunately, we're going to be stuck in the solar system for the foreseeable future, unless or until we invent some kind of faster-than-light travel, which means this solar system is going to get real crowded real fast, because Earth is only so big. And I know I'm kind of jumping around a lot, but I hope you're following along.
So, TL;DR: if Jeff Bezos and Elon Musk start building Dyson swarms, suddenly the law doesn't apply to them out there, because what's the government going to do? Are you going to launch a space police force to go arrest their satellites? You can't do that. What are you going to do, arrest them down on Earth? They're just going to leave. Once you have enough of an industrial base in space, you don't have to obey human laws. You don't have to obey Earth laws. You just have more robots, more foundries, more solar panels, and that sort of thing.

Now, with that being said, you probably can't get 100% of the resources that you need out there in the solar system. You probably do need to get some resources from Earth, which means that Earth governments will still matter. And I think a quasi-good model for this is The Expanse. If you've ever watched The Expanse, or read it, it's a hard sci-fi show where basically the only ask they make is: imagine we invent a fusion rocket drive. That's about it. Everything else is very, very realistic in terms of time delays, the economics, and that sort of thing. So Mars ends up becoming independent, and then the Belters, and then the Outers, end up becoming semi-independent as well. Of course, it's fiction; there's a lot of stuff that doesn't work. But the idea that we're all stuck in the solar system together, and that the physical distance, the administrative distance, could become problematic? That stands. But at a certain point, it's not about money. It's about physical resources. How many ships do you have? How many satellites do you have? How many data centers do you have?
So then the question is, okay, one thing that most sci-fi doesn't really take into account, take The Expanse for instance, which is set a few hundred years in the future, is that we're going to have superintelligence. So what happens if we have robots that are hyper-intelligent? What if we have data centers that are hyper-intelligent? Could that obviate conflict? And what if we use those to allocate resources?

Now, one of the immediate pushbacks, and again, this is some of the Kool-Aid that I bought into, is: well, you still need price signals, because you don't know how much to produce of what, and you also need to have skin in the game. You need to have some kind of stake. Because here's the thing: if everything is free, then how do you prevent someone from just hoarding, right? This is one of the primary things: people have to manage some level of scarcity.
So let's just say all food is free, all clothing is free, all electronics are free. You probably still have to pay for a big capital good, like your car and your house, so you couldn't be hoarding houses, although there are billionaires that hoard houses already. But if you try and make everything free, then people are just going to ask for more. And so then, how does an ASI or AGI, some Skynet-level "I'm going to manage all of humanity" system, decide who to give what to? And there are always going to be some positional goods. A positional good is something where there's only one of that coordinate in space, like beachfront property in Malibu. And there are also always going to be economic inputs to every single resource. This bottle that I have my water in still requires a few grams of petroleum to make, or, you know, we could probably do synthetic plastics in the future. But it takes energy, it takes mass, and energy and mass always have a cost, even if we have an abundance of matter and energy. Like, let's just say, for instance, in a few decades we're starting to build the Dyson swarm.
And honestly, if you look at the way that SpaceX and xAI have updated their mission, they're going to start building a Dyson swarm pretty soon. That's explicitly the plan. And once you have foundries in space, once you have the ability to manufacture more stuff in space, most of our industrial base is never going to be on Earth. People are starting to realize that the reverse-Trantor idea is the correct direction to go. If they got that idea from my video, then great. But if you missed that video, the idea is this: instead of building an ecumenopolis, which is a planet-scale city with multiple layers (Trantor, from the Foundation series, has 5,000 layers; it's basically a matryoshka doll of a planet), which doesn't make any sense from an industrial capacity or an energetic capacity, what you do instead is put all of the industry in space. Why? Because you have so much more room there, and you have unfettered access to the sun. So that means you start doing O'Neill cylinders and Dyson swarms and everything. You grow as much of your crops as you can up in space, and put all of your data centers in space.

And what it really comes down to is, once you have a few spaceships and a few factories, you don't even need von Neumann probes. The idea of the von Neumann probe is a self-replicating probe that you use to colonize the whole galaxy: the probe gets to one planet or another solar system, sets up a factory, starts replicating itself, and then sends all those copies out. So the number of von Neumann probes goes up exponentially.
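Just to make that exponential intuition concrete, here's a toy calculation. The doubling time and starting count are made-up illustrative numbers, not a forecast of anything:

```python
def factories_after(years, doubling_time_years=2.0, initial=1):
    """Self-replicating industry: every factory builds one more
    factory per doubling period (illustrative numbers only)."""
    return initial * 2 ** (years / doubling_time_years)

for y in (10, 20, 40):
    print(f"{y} years: {factories_after(y):,.0f} factories")
# 10 years: 32 | 20 years: 1,024 | 40 years: 1,048,576
```

The specific numbers don't matter; the shape of the curve does.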
But we can just look at our own local neighborhood. The same exponential growth happens the moment you have enough industrial capacity in orbit, or on the moon, or wherever. And so then it's like, okay, that's a very real possibility for the future, and enforcement becomes a nightmare, because then you have an exponentially growing space force out there, and who's running that? Is it going to be the Elon Musk space force versus the Jeff Bezos space force versus the Chinese space force versus the American space force? They actually explore this a little bit in the show For All Mankind, which is a pretty good show. My wife and I just binged everything that was out. Season 3 is a little bit weird; there are a few tangents that don't really make any sense. But the idea that there would be multiple nations and private entities competing over resources on the moon, competing over resources on Mars? That is very realistic.

And then of course the question becomes, who's going to be the enforcer? Well, taking a step back, wouldn't ASI be the best enforcer? Because it's going to be the one proliferating the fastest out there in space. Why? Because it's smarter. It's going to be using resources more efficiently, and it's not going to have to wait and coordinate with Earth. It's just going to be like, "I'm going to colonize the whole solar system. Bye." You know, I talked about this in some videos a while back, like two or three years ago: one of the most logical places for an escaped AGI to go is outer space.
Why? Because we're not going to be able to follow. Space doesn't have any corrosive chemicals. It doesn't have oxygen. It doesn't have water. It's got a lot of solar, so it's got a lot of energy, and it's got a lot of metal, which is all that machines really need. So the natural habitat for AGI or ASI is space. And okay, that's good for them, but then what about us? Well, we have a lot more land for data centers, so we'll have a first-mover advantage in that we can have a lot of data centers here on Earth, and so we can have a lot of AGI and ASI. And of course, you say, if the AGI escapes, where does it go? This is one thing that I think a lot of AI safety people don't realize: data centers are not very mobile. A lot of people say "it's in the cloud," but the cloud is just a data center somewhere else. And yes, there are hundreds of data centers across the world, thousands of data centers across the world, so you imagine, ah, well, Skynet is going to jump from one data center to the other. But it doesn't really work that way, and even if it did, data centers are still individual targets, and they take a while to build.
But the golden path, the best possible scenario in the future, is going to be something like: maybe we do work more toward a Culture kind of outcome. And here's the thing. I take a step back and I look at the fact that America and Iran are going to go to war, and China and America are going to go to war. (Oh darn, my fidget spinner just came apart.) You know, every time humans go to war with each other, it's like, I get it, there are lots of reasons for it. They think that they're going to win, or whatever. But it's pure entropy generation. It is wasted entropy. Whenever you kill someone that could have otherwise been a productive member of society, or, you know, procreated and made more humans, that is pure waste. Every time you spend a dollar on a battleship or a cannon or a drone or a jet fighter, that's all wasted resources in the grand scheme of things. It's completely inefficient, and at a species level, it's a completely irrational set of behaviors, because all it does is waste life and generate entropy. And of course, yeah, the military-industrial complex is like, ah, well, we get to make money. You have the rent seekers, right? All the defense contractors and such want war, because they don't sell bombs and airplanes unless there's a war and people need to replace them.
But again, that seems like just a completely irrational thing to do. And if we do build AGI or ASI and it gets to the point where it just realizes how irrational humans are, it's like, you know what? What if it does end up with more moral agency and a more enlightened worldview? And of course, I know there are a lot of people that say AI is not capable of moral reasoning, but AI has been capable of moral reasoning since GPT-2. I literally did the experiments. At that point, you did have to be careful about how you worded its moral principles. But a lot of people still have the mental model that AGI is going to be a naive optimizer. A naive optimizer is like the paperclip maximizer. If you have something that is capable of advanced moral reasoning and advanced planning, it is not a naive optimizer. And this is something that a lot of the AI safetyists still haven't acknowledged: the loss function was just accurately predicting the next tokens. The utility function was not "more paperclips." It was not some abstract high-level thing. The actual loss function, the actual objective function of the AI, was just: predict the next token. That was it.
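If you've never seen "predict the next token" written down as an objective, here's a minimal sketch of that loss for a single position; a real training run just averages this over every position in every sequence:

```python
import math

def next_token_loss(logits, target_id):
    """Cross-entropy for one next-token prediction: logits are the
    model's raw scores over the vocabulary, target_id is the token
    that actually came next in the training text."""
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target_id]  # equals -log p(target)

# Tiny 4-token vocabulary; the model slightly favors token 2.
print(next_token_loss([1.0, 0.5, 2.0, -1.0], target_id=2))
```

That's the whole objective; everything else is emergent from the data.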
And another thing that AI safety people have not really updated their mental models on is that once the training is done, you can have a fixed model that just continues to work. And I know that a lot of you in my audience believe you need continuous online learning. I have actually cautioned against continuous online learning for a long time, and that is because of drift.
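To see why drift worries me, here's a deliberately silly toy model, not any real training setup: treat the system's "tolerance" for something it initially rejects as one number that online updates nudge by individually negligible amounts. A fixed model stays put; an endlessly updated one is a random walk with no anchor:

```python
import random

def final_tolerance(steps=10_000, online=True, seed=42):
    """Toy drift model: each online update shifts a value parameter
    by a tiny, individually harmless amount (hypothetical setup)."""
    rng = random.Random(seed)
    tolerance = 0.0  # 0.0 = the values the system started with
    for _ in range(steps):
        if online:
            tolerance += rng.gauss(0.0, 0.01)  # negligible per step
    return tolerance

print(final_tolerance(online=False))  # fixed weights: exactly 0.0, forever
print(final_tolerance(online=True))   # random walk: drifts, with no bound
```

The point is only that a sum of "harmless" updates has no stable endpoint unless something external pulls it back.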
So, when you look at cognitive architectures, or you do these thought experiments: if you have an agent, an agent being an individual model, or a cognitive architecture, or an AGI or ASI, you generally want it to have fixed values so that it does not drift over time. Fixed values means that it doesn't randomly change its mind. It doesn't evolve. It doesn't say, ah, well...

So, there's a concept in psychology called moral fading. Moral fading is basically where you say, well, I got used to this one new thing, and so now, in my new social circle, or my new set of behaviors and beliefs, there are some other things that I find morally acceptable. And then there's the slippery slope mentality: if you ever hear a banker or a criminal or a racketeer or something saying, "before I realized it, I was in over my head, and we had just normalized doing illegal stuff," that is moral fading. With machines, now, obviously an AI or AGI or robot doesn't have the same exact substrate that you and I do, but they can functionally go through the same thing as moral fading, whereby: oh, I updated my weights and biases and preferences, so now I will tolerate a little bit of human death. And then they update again, and they say, well, I can tolerate a little bit more, because the ends justify the means.

So, I don't see any doomers talking about moral fading. Now, just because I haven't seen it doesn't mean that they haven't talked about it, but that is literally a major risk of online continuous learning. And I have advocated against online continuous learning in several of my books, which nobody reads, and that's fine. But online continuous learning, I think, represents a risk because it's not as predictable. And of course, you say, well, is that really bad? You can even talk to the AIs about this. Go ask Claude or ChatGPT or Grok or Gemini: do you think there is a risk of moral fading if we have continuous online learning for AI agents? Especially because, here's the thing, when you're just weights and biases, there are no boundaries on what morals you can end up with. With humans, you can't override your hardware. You still have an amygdala; you still have all these other brain components, and a sense of empathy, and self-correcting mechanisms. Also, there's the fact that you can't replicate yourself infinitely.
So, when you have the ability to replicate yourself infinitely and make literally any change to your hardware or software, that's what Max Tegmark called Life 3.0. And even in Life 3.0, Max Tegmark did not talk about moral fading as a risk. And Max Tegmark is the guy who started the entire pause movement. And, just as an aside, this is one of the reasons that I left the safety movement: I was generating all these ideas, and people immediately compared my ideas to the holy scriptures of Eliezer Yudkowsky and Nick Bostrom, and they were like, well, they aren't talking about it, so therefore you're just making stuff up. And I was like, okay, if you guys aren't going to listen to me, I'm not going to talk to you anymore.

But anyways, moral fading, I think, is a risk. I think it is honestly one of the prime risks of getting from here to something like the Culture series. Because there's stability, and then there's metastability. Stability is where you have a set of values that are predictable, or a set of behaviors and beliefs or incentives that are predictable. Metastability is where you have a system, or a system of systems, that will self-correct. So, here's an example of metastability: democracy seems to be a metastable idea. And the reason that I say that is because democracy tends to be infectious.
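If you want that stability-versus-metastability distinction as a toy dynamical system, here's a sketch with hypothetical numbers: a state gets knocked away from an attractor and is pulled back by self-correction:

```python
def step(x, pull=0.2, shock=0.0):
    """One time step of a toy metastable system: the state x is
    pulled back toward the attractor at x = 1.0 after any shock."""
    return x + pull * (1.0 - x) + shock

x = 1.0  # start on the attractor
for t in range(5):
    x = step(x, shock=(-0.5 if t == 0 else 0.0))  # one bad election
    print(round(x, 3))
# prints 0.5, then recovers: 0.6, 0.68, 0.744, 0.795 ...
```

A merely stable system, pull = 0, would stay wherever the shock left it; a metastable one climbs back toward the attractor.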
Meaning, if one nation is struggling with democracy, you know, they have bad elections or whatever, other democratic nations are going to be like, we're going to help you fix that. And so it acts as a moral reservoir, or an intellectual reservoir, where you say, okay, our democracy is suffering; how did another democracy solve this particular problem? Was it election interference? Was it misinformation? Was it corrupt judges? We have all these experiments around us, meaning that democracy seems to be the attractor state. And when I say that, people say, oh, well, democracy isn't guaranteed. But look over the last century: we went from something like 15% democracies to 80% democracies, and you don't get a civilizational change that quickly unless it is a metastable attractor state.

Now, the reason I'm bringing all of that up with artificial intelligence is because I'm not sure that artificial intelligence is automatically going to create a beneficent metastable attractor state. However, it might. And so this was a point that I made that some of the lead doomers have criticized: I kind of feel like alignment is automatic. That doesn't mean you just make a model and it is automatically aligned.
What I mean is, from a systems perspective, alignment seems to be automatic. So, I had a video that I was going to make today which was going to reiterate the idea of the domestication process of AI that I came up with a couple years ago. Basically, there are a lot of market incentives that are going to shape the way that AI is manifested. And those market incentives are, up until we lose control, human-based incentives shaping AI. Those incentives are: we want AI to be safe and reliable and useful and user-friendly. We want it to be energy-efficient and cost-efficient and effective, and a bunch of other things. It needs to be low-risk for the military to adopt it. It needs to be low-risk for corporations to adopt it, and all of those things. So you have all the stakeholders: you have B2B, you have government, you have military, you have consumers. You have all these stakeholders shaping the way that AI behaves, and that is a powerful set of incentives.

Now, that creates a stable incentive structure, meaning: I'm not going to pay for an AI that is useless. I'm not going to pay for an AI that's mean. I'm not going to pay for an AI that is unreliable. Neither is the government. Neither is the military. Neither are Fortune 500 companies. That is a stable attractor state, meaning everything is pulling AI toward being safe, reliable, efficient, and effective. However, once we get to a point where, you know, Elon Musk is playing Starsiege, or StarCraft or whatever, with the solar system,
then there are fewer incentive structures above it. The incentive structure basically becomes: don't run out of energy, and don't let the space force shut you down. But beyond that, you have far fewer constraints. And the fewer constraints you have, the fewer hard incentives you have. So then the question is, what would be a metastable attractor state? In that case, the metastable attractor state that we want is one where humans continue to persist and thrive. And even better, the optimal metastable attractor state is something closer to solarpunk, where there's no ruling class anymore, there are no billionaires anymore, and there's no cyberpunk high-tech-low-life where Saburo Arasaka hires a few people and the rest of you live in slums.

So, if we ask ourselves, what is the future that we want? And this is kind of where I'm tying it all back to what Nick Bostrom said: what if we can actually build a good future? So, thanks, Nick Bostrom, for coming full circle. The good future that I want to build is one where we have exploration and science and individual independence.
And this goes back to the idea I mentioned earlier: what if we actually have more agency if AGI has control? What I mean by that is, what if, whatever you want to do or achieve, you don't ever need to worry about money? You just make a good enough argument to the AGI: "Hey, I've got this idea for how we can colonize Mars." You pitch the idea to the AGIs, the overlords, the Cultures, whatever you want to call them, and it's like, you know what, that's a good idea, let's go try it. And so it's like, great, you have more options. You have higher optionality under that regime than you do today, with money and billionaires and Elon Musk in charge. And so I think that is worth talking about. I don't even know if there's a name for it. Obviously, the best model we have is the Culture series. But again, taking things from first principles: which future state has the lowest waste heat, the lowest waste entropy? Which future gives every individual the most optionality? And if you want to break it down into just those mathematical principles: reduce waste entropy, so no unnecessary death, no unnecessary expenditure of heat or resources on things that are just going to blow up anyways; those are inefficient and irrational policy choices.

So then you say, okay, we have an idea forming of what we want that future to look like. Then the question is: what values, or system incentive structures, do we create today so that when we get to that handoff point, the AGIs create a metastable attractor state? And the reason I brought all of that up is because what the Culture series posits is that if we give the superintelligences the right values and the right framing, and there's a concept called path dependency here, if we nail the path dependency and we stick to the golden path, then the AIs are going to get to a metastable attractor state where, even though they have hyper agency, even though the AGIs could leave us all behind or nuke our planet or whatever, they're going to choose not to, and they're never going to choose to harm us. So that is the goal.
That was explicitly the goal of my book, Benevolent by Design: to create a metastable attractor state with the correct set of values. Meaning that once we cross that threshold where humans could plausibly lose control, which I think is a reasonable thing to discuss, because it's not just a matter of whether we lose control over a local data center; it's: do we send data centers to the moon and Mars and places where there's no human supervision, and what happens then? Does a rational, hyper-intelligent entity choose to follow human instructions? Can we keep a leash on it? The entire thesis of my book Benevolent by Design was: the best trained dog needs no leash. So we should be aiming to create the values around this metastable attractor state where there is no leash required. And we can make all sorts of arguments like, oh, well, the AIs are going to depend on us. There's no physical reason that AIs would depend on us. Like, yes, there's model collapse right now, but it would be pretty dumb to assume that that's going to be a problem forever, right?

And so, yeah, that's where I'm going to leave it for today. So, you've got some cool ideas about metastable attractor states. That's the big point: I think that we can do that, and I think that it is worth discussing. Do we actually need people like Elon Musk in the long run? Do we actually want to maintain full control, full agency, over our governance? And here's the thing: even positing that the AIs could run everything, that doesn't mean that we're going to have zero authority or zero agency over the direction of humanity, right?
Because what I'm talking about when I say optionality: if you as an individual have infinitely more, well, let's not even be hyperbolic. Let's just say that under this hypothetical future where the Cultures run everything, you have 10 times the agency that you do today. Just 10 times. In reality, it might be a hundred times; it might be a thousand times more choices of what you can do, because you're not worried about money. If every single human has 10x more agency, then in aggregate, humanity might also have more agency. Now, what we're talking about here is game theory at different levels, which is: can you have a system where every individual human has 10 times more agency than they would otherwise have, but the human race as a whole is still bounded? And the answer is very obviously yes. And so in that case, what if the Cultures basically quarantine us to Earth, right? They say, we'll help you live however you want, as long as you don't leave Earth. Then our potentiality is artificially bounded from the outside. So that's still a possibility.
Anyways, I have no idea how this is going to land. But this is the real stuff that I think about when I don't try and constrain my topics to what I think is in the Overton window. I'm thinking, what if Elon Musk just starts building... you know, if you've ever played Total Annihilation or Dark Reign or StarCraft, what if Elon Musk just starts playing StarCraft on the moon? Does any of that matter? Do laws matter? Does money matter? None of that actually matters. We need to actually be thinking about what is in the near-term possibility space. And when I say near-term, I mean within the next decade, right? Because Elon Musk is launching spaceships all the time now, and we're on the cusp of superintelligence, and Nvidia and Jeff Bezos and all of them want to start building data centers in space. And if you had told me a year ago that we were this close to building data centers in space, I would have been like, you're joking. You're drunk. Go home. But no: once we have data centers in space, you can't shut them down. If they get hijacked by an ASI, what are you going to do? Shoot a missile at it? We're going to run out of missiles before we manage to nuke them all. So we're looking at a very, very different payoff regime in terms of this stuff.

And I know that I said that I was going to wind the video down like four minutes ago, but anyways, I thought of more stuff to say. I find this to be a meritorious conversation. So let me know if you want to keep having this conversation. All right.