
Elon Musk: A Different Conversation w/ Nikhil Kamath | Full Episode | People by WTF Ep. 16

Transcript

0:44

Our audience is largely wannabe entrepreneurs in India.

0:49

And I feel like all of us have so much to learn from you

0:53

because you've done it so many times over in so many different domains.

0:56

So, we will speak to them today,

0:58

and I will try and centre all my questions in that direction

1:03

so they can take advantage of this conversation

1:05

and maybe start-- take a chance and build something.

1:25

Do you want a coffee?

1:27

Um...

1:28

-Sure, why not? -Okay.

1:30

Are we gonna be talking for a while?

1:32

-[laughs] -I hope we are.

1:33

Okay, good. Sure.

1:36

-Um... -Meghana?

1:38

May I trouble you for a coffee?

1:39

Can we get another coffee?

1:41

-[Meghana] Anything's fine? -Uh...

1:43

Cappuccino, I guess.

1:44

[Meghana] Cappuccino? Okay.

1:45

Are you a coffee drinker, Elon?

1:47

-Yeah, yeah. -Yeah?

1:48

Yeah, I have a coffee once,

1:49

-usually in the mornings. -Okay.

1:51

-One-a-day kind of thing? -Yeah, pretty much.

1:58

You want to wait for it?

1:59

No, I'm good.

2:04

[laughter]

2:07

The first thing I must say

2:09

is you're a lot bigger and bulkier, muscular,

2:13

than I would have thought you are.

2:15

Oh, stop, you're making me blush.

2:19

Really. Seriously.

2:22

Yeah, I mean, look, on the Internet, I'm small, you know?

2:28

You're essentially...

2:31

What percentage of Internet is spent on Twitter?

2:35

Is there a number to it? On X.

2:37

Well, so, we have, like, about 600 million monthly users.

2:43

Although, it can spike up, if there's some major event in the world.

2:47

It can get up to, I don't know, 800 million or a billion,

2:52

if there's some major event in the world, so...

2:58

I don't know, 250 to 300 million per week type of thing.

3:03

It's a pretty decent number.

3:04

It tends to be readers, you know, people that read words.

3:11

You know, so...

3:12

Do you think that'll change?

3:14

Yeah, I mean, there's...

3:17

There's certainly a lot of video on the X system.

3:21

But at this point... Increasing amounts of video.

3:24

But I think where the X network is strongest

3:31

is among people who think a lot and read a lot.

3:36

You know, so, that's where it's gonna be strongest.

3:39

Because we have words.

3:41

And, you know, so...

3:45

Among readers, writers, and thinkers, I think X is number one in the world.

3:52

As far as social media goes, the form factor,

3:56

if you had to wager a guess for tomorrow...

3:59

-Yeah. -...how much is text, how much is video?

4:02

I've heard you speak about maybe voice and hearing

4:06

being the next form of communication with AI.

4:09

What happens to X in its true form? How does it evolve?

4:15

Yeah, so, I do think most interaction is gonna be video in the future.

4:20

Most interactions are gonna be real-time video with AI.

4:23

So, real-time video comprehension, real-time video generation,

4:28

that's gonna be most of the load.

4:29

And that's how it is for most of the Internet right now.

4:33

Most of the Internet is video.

4:34

Text is a pretty small percentage.

4:37

But the text tends to be higher value, generally.

4:41

Or more, it's more densely compressed information.

4:48

Yeah, so...

4:50

But if you say what is the most amount of bits generated and compute spent,

4:55

it's certainly gonna be video.

4:58

So, I used to be a shareholder of X,

4:59

-a very small one. -Okay.

5:01

And I got paid when you bought Twitter and you made it X.

5:06

-Happy decision? Glad you did it? -Yeah, yeah, I think it was important.

5:11

You know, I felt like Twitter was heading in...

5:15

or had gone in a direction

5:17

that had more of a negative influence on the world.

5:21

It was...

5:23

I mean, of course, this depends on one's perspective.

5:25

Some people will say, well, actually,

5:26

they liked the way it was, and now, they don't like it.

5:30

But I think the fundamental thing was that...

5:36

Twitter was amplifying...

5:39

I would say, a fairly far-left ideology

5:42

by most people's standards in the world,

5:43

because of where it was based, in San Francisco.

5:46

And then they actually suspended a lot of people on the right.

5:53

So, from their perspective,

5:55

even someone in the centre would be far right.

5:58

If you're far left, anyone in the centre is far right, because...

6:02

It's just on the political spectrum,

6:05

they're just as far left as you get in the United States and in San Francisco.

6:09

So, what I've tried to do is just restore it to be balanced and centrist.

6:13

So, there haven't been any left-wing voices

6:16

that have been suspended or banned or de-amplified or anything like that.

6:23

Now, some of them have chosen to just go somewhere else, but...

6:28

But at this point, the operating principle of the X system

6:34

is to adhere to any country's laws,

6:38

but not to put our thumb on the scale beyond the laws of a country.

6:44

When I think of social media...

6:47

-Oh, thank you. -Thank you.

6:49

When I think of social media, Elon,

6:52

I feel like... even data suggests that the current incumbents

6:57

seem to be losing traction amongst the youngest of audience.

7:01

Yeah.

7:02

Even platforms like Instagram...

7:04

I mean, they're not exactly like Twitter, but platforms across the board.

7:08

If one had to rework social media and build something bottom-up,

7:13

what do you think would work for the world of tomorrow?

7:18

Well, I mean, I don't think that much about...

7:24

about social media, to be frank. I mean, it's...

7:26

I mostly just wanna have something where there's...

7:32

in the case of X, kind of a global town square,

7:35

where people can say what they wanna say with words, pictures, video...

7:43

where there's a secure messaging system.

7:45

We've recently added the ability to do audio and video calls.

7:50

So, really trying to bring the world together

7:55

into a collective consciousness.

8:02

That's, I guess, different from just saying, like,

8:06

"What is the most dopamine-generating video stream that one could make?"

8:12

Which, you know,

8:15

I think it can be a little bit of brain rot, frankly.

8:19

You know, if you're just watching videos

8:20

that just cause dopamine hits one after another, but lack substance,

8:25

then I think those are not great. That's not a great way to spend time.

8:32

But I do think that's actually what a lot of people are gonna wanna watch.

8:36

So, if you say, like, total Internet usage,

8:40

it's gonna probably be optimising

8:42

for, you know, neurotransmitter generation.

8:46

Like, there's somebody getting a kick out of it.

8:50

But it becomes like a drug type of thing.

8:55

But I'm not really after...

8:58

My goal is not to do that.

9:00

I guess I could do that if I wanted to,

9:03

but, um, I...

9:05

I just wanna really have a global platform that brings together...

9:11

Like I said, it becomes as close to, sort of, a collective consciousness

9:16

of humanity as possible.

9:21

And one of the things that we've introduced, um,

9:24

for example, is automatic translation.

9:30

'Cause I think it would be great to bring together what people say

9:34

in many different languages, and...

9:38

but automatically translated for the recipient.

9:41

So, you have the collective consciousness,

9:42

not just of, say, people in a particular language group,

9:46

but you have the thoughts of people in, you know,

9:51

every language group.

9:53

And why is that important, Elon?

9:55

Collective consciousness, to have one platform?

9:58

I guess...

10:02

Yeah, why is that important?

10:12

I guess it's-- You could also say, like, why...

10:18

You know, if you consider humans,

10:19

like, humans are composed of around 30 to 40 trillion cells.

10:29

You know, there's trillions of synapses in your mind.

10:37

But there's not-- The why of it, I mean, I guess,

10:40

it's just so we can increase...

10:45

our understanding.

10:47

Increase our...

10:51

our understanding of the universe.

11:00

I guess I had this...

11:02

sort of question about what's the meaning of life, you know?

11:09

Why is anything important?

11:15

You know, why are we here?

11:18

What's the origin of the universe? What is the end?

11:24

What are the questions that we don't even know to ask?

11:30

And probably the questions we don't even know to ask

11:32

are the most important ones.

11:36

So, I'm just trying to, I guess, understand what's going on.

11:39

What is going on in this reality?

11:44

Is this reality?

11:50

And where did you get when you asked, "What is the point of life?"

11:56

Yeah, so, I...

11:59

I came to the conclusion that...

12:02

which is somewhat, in the Douglas Adams

12:06

Hitchhiker's Guide to the Galaxy school of thought,

12:08

-which is-- -42.

12:10

Yeah, you know, he sort of...

12:13

Hitchhiker's Guide to the Galaxy

12:15

is like a book on philosophy disguised as humour.

12:17

-Yeah. -And...

12:20

That's where you get the--

12:22

You know, Earth turns out to be this computer to understand,

12:26

to get to figure out the answer of the meaning of life.

12:29

And it comes up with the answer of 42.

12:31

But then, it's like, "What does 42 mean?"

12:36

And it turns out, well, actually,

12:37

the hard part is the question, not the answer.

12:42

And for that, you need a much bigger computer than Earth.

12:45

So, basically, what Douglas Adams was saying is that

12:47

we actually don't know how to frame the questions properly.

12:52

And so, I think by expanding the scope and scale of consciousness,

12:55

we can better understand what questions to ask

12:59

about the answer that is the universe.

13:03

Do you believe the collective consciousness of society--

13:10

I was watching this movie recently called Gladiator.

13:13

-Russell Crowe. Have you seen it? -Yeah, yeah.

13:16

In Gladiator, in Rome,

13:18

when people are fighting,

13:21

and the crowd is cheering when people kill each other...

13:27

the collective is very much like the mob.

13:31

It doesn't have nuance in its opinion, per se.

13:37

That's this particular kind of mob.

13:38

I mean, they're sort of going there to see people kill each other, you know?

13:42

Do you suspect the society we live in today is very different?

13:45

We don't, generally-- At this point, we don't...

13:50

you know, go and watch people kill each other.

13:52

[laughter]

13:54

Maybe some kind of euphemism of that.

13:57

-Sports, I suppose. -Mm.

14:00

So, people do sports without--

14:03

where teams attempt to defeat each other,

14:05

-but minus the death. -Right.

14:10

Just going back to the consideration of a human.

14:15

We all started out as one cell, but now, we are...

14:19

over 30 trillion cells.

14:26

But I think most people feel like they're one body.

14:30

Like, you know, usually,

14:32

your right hand's not fighting your left hand type of thing, you know?

14:35

They just sort of cooperate.

14:38

Your mind is...

14:42

you know...

14:46

just a vast number of neurons.

14:48

But most of the time, it doesn't feel like there's, you know,

14:52

a trillion voices in your brain. Hopefully not.

14:57

So, there's clearly more that happens

15:02

when you have trillions of cells

15:06

working as a cellular collective than, say, one cell.

15:11

Or a small,

15:14

you know, small multicellular creature.

15:16

There's clearly something different that happens.

15:19

Like, you can't talk to a bacteria, you know?

15:21

-Yeah. -Yeah. It's very silent.

15:25

They just sort of wiggle around, and...

15:27

From their perspective, I don't know.

15:29

I just thought of what is life like from the perspective of an amoeba, you know?

15:34

But I know you can't talk to an amoeba. Like, they don't talk back.

15:37

But you can talk to humans.

15:39

So, there's just something,

15:42

obviously, qualitatively fundamentally different for humans.

15:48

Once you have a large number of cells,

15:49

and, you know, sufficiently large brain type of thing,

15:54

there's... you can now talk to humans.

15:57

And they can say things, they can produce things.

16:02

But bacteria are not gonna produce a spaceship, for example.

16:07

But humans can.

16:08

So I think there's something qualitatively different

16:12

that also happens when there's a collection of humans.

16:15

In fact, it's safe to say that a single human

16:17

cannot make a spaceship.

16:18

I cannot make a spaceship by myself.

16:20

But with a collection of humans, we can make spaceships.

16:25

So, there's something, obviously,

16:27

qualitatively different about a collection of humans.

16:33

In fact, it would be impossible for me to learn all of the areas of expertise.

16:38

There wouldn't be enough time in one lifetime

16:40

to even learn all the things before I was dead.

16:46

So, you really fundamentally have to have a collection of humans to make a rocket.

16:52

Then, I think there are probably some other scaling...

16:57

qualitative scaling things that happen when you have groups of humans.

17:02

And then, if the quality of the interaction,

17:06

or the quality of the information flow...

17:10

the better it is,

17:12

the more the human collective will achieve.

17:17

And, like I said, I'm just curious about the nature of the universe.

17:21

And I think if we-- It's safe to say,

17:24

like, if we increase the scope and scale of consciousness,

17:29

we're much more likely to understand the nature of the universe

17:32

than if we reduce it.

17:36

Is that a bit like spirituality?

17:37

A lot of people talk to me about spirituality.

17:40

Right.

17:42

I still don't know what it actually means.

17:43

Like, I keep asking them, "What do you mean?"

17:45

Yeah. "What do you mean?"

17:47

I mean, a lot of people have spiritual feelings.

17:51

Right.

17:54

And I wouldn't try to deny that those spiritual feelings are real to them.

17:59

But it's...

18:02

It doesn't entirely translate.

18:03

Just because somebody else has a spiritual feeling

18:05

doesn't mean that I would have that spiritual feeling.

18:12

You know, I tend to be kind of physics-pulled,

18:14

which is, like, if something has predictive value, then...

18:19

I'll pay more attention to it than if it doesn't have predictive value.

18:23

Right.

18:25

So, you know, physics, I would say,

18:28

is the study of that which has predictive value.

18:31

I think it's a pretty good definition.

18:33

My primary job, Elon, is a stockbroker and stock investor.

18:37

-Okay. -There is no predictive value.

18:39

Nobody knows what will happen tomorrow.

18:41

Well, but I think you can generally say, you know, that...

18:48

If it's long-term for a company, then you can say, like,

18:54

"Do you like the products or services of that company?

18:58

And is it likely to... Do you like the product roadmap?

19:02

Do you like-- It seems like they make great products

19:05

and they're likely to make great products in the future."

19:08

If that's the case,

19:09

then I would say that's probably a good company to invest in.

19:14

And I think you also want to believe in the team.

19:17

So if you're like, "Well, that's a talented and hardworking team,

19:20

they make good products today,

19:21

they seem to be still motivated to make things in the future."

19:24

Then I'd say that's a good company to invest in.

19:27

Fair point.

19:28

Yeah, and now, that...

19:32

That won't solve for the daily fluctuations

19:34

which happen, and, sometimes, are pretty extreme.

19:38

But over time, that is the right way to invest in stocks.

19:44

Because a company is just a group of people

19:46

assembled to create products and services.

19:48

So you have to say, "Well, what other--

19:50

How good are those products and services?

19:52

Are they likely to continue to improve in the future?"

19:54

If so, then you should buy the stock of that company

19:57

and then don't worry too much about the daily fluctuations.

20:00

Right.

20:02

What's got you most excited now, Elon, in terms of all that you're building?

20:07

You're doing so much.

20:09

So let me just preface and contextualise who is watching this.

20:14

Our audience is largely wannabe entrepreneurs in India.

20:19

Okay.

20:21

Really ambitious, really hungry, want to take the risk and build something.

20:27

And I feel like all of us have so much to learn from you

20:31

because you've done it so many times over in so many different domains.

20:33

Yeah.

20:35

So, we will speak to them today,

20:37

and I will try and centre all my questions in that direction

20:40

so they can take advantage of this conversation

20:43

and maybe start-- take a chance and build something.

20:47

Okay, sure.

20:52

Yeah, I guess the most important thing to do is just...

20:57

make useful products and services.

21:02

Yeah.

21:04

Which one of all the products and services that you're building

21:07

has got you most excited today?

21:13

Well, I think that there's increasingly a convergence, actually,

21:16

between SpaceX and Tesla and xAI.

21:21

In that, if the future is solar-powered AI satellites,

21:25

which it pretty much needs to be in order to...

21:30

In order to harness a non-trivial amount of the energy of the sun,

21:35

you have to move to solar-powered AI satellites in deep space...

21:40

which, somewhat, is a confluence of Tesla expertise and SpaceX expertise.

21:47

And xAI on the AI front, so...

21:52

it does feel like, over time, there's somewhat of a convergence there.

21:56

But all the companies are doing great things.

22:00

Very proud of the teams, they do great work.

22:02

So, we're making great progress with Tesla on the autonomous driving.

22:08

I don't know if you've tried the self-driving?

22:10

-Mm-mm. -Have you tried it?

22:11

I've tried it in the Waymo, not in the Tesla.

22:13

Yeah, it's worth trying.

22:16

We actually have it here in Austin.

22:18

-So you can, like... -Yeah, I'd love to try it.

22:19

You can literally just download the Tesla app,

22:21

and I think it's open to anyone.

22:25

-Yeah. -Definitely try it out.

22:26

You know how it goes.

22:30

But we've made a lot of progress with electric vehicles,

22:34

with battery packs and solar,

22:37

and very much so with self-driving.

22:41

So, basically, real-world AI.

22:44

Tesla is the world leader in real-world AI, I would say.

22:50

And then, we're gonna be making this robot, Optimus,

22:52

which is starting production, hopefully, summer next year at scale.

22:59

And I think that's gonna be pretty cool. That'll be like--

23:01

I think everyone's gonna want their own personal C-3PO, R2-D2,

23:06

you know, a helper robot. Like, it would be pretty cool.

23:10

And then, SpaceX is doing great work with the Starlink programme,

23:16

providing low-cost, reliable Internet throughout the world.

23:21

And hopefully, India.

23:23

We'd love to be operating in India. That would be great.

23:26

We're operating in 150 different countries, now, with Starlink.

23:29

Can you give me a bit about Starlink and how the tech works?

23:33

'Cause somebody I was speaking to...

23:35

I don't know if you know this company called Meter out of San Francisco.

23:39

They're trying to replace network engineers.

23:41

-But-- -Don't know it, no.

23:43

So, he was telling me about how, in densely populated areas,

23:47

Starlink works differently

23:49

than it might be in a place with not as many people.

23:53

Can you explain how it works?

23:55

Yeah, so, Starlink...

23:57

There's several thousand satellites in low-Earth orbit,

23:59

and they're moving around 25 times the speed of sound in these...

24:05

You know, they're zipping around the Earth, basically, and...

24:10

they're at an altitude of about 550 kilometres,

24:15

which is called, generally, low-Earth orbit.

24:17

Because they're at low-Earth orbit there, the latency is low.

24:22

Because the distance is not that far

24:25

compared to a geostationary satellite at 36,000 kilometres.

24:31

So, you've got thousands of satellites providing

24:37

low latency, high-speed Internet throughout the world,

24:45

and they are interconnected as well.

24:46

So, there are laser links between the satellites,

24:49

so it forms, sort of, a laser mesh.

24:52

So that the-- Let's say if cables are damaged or cut,

24:57

like fibre cables, the satellites can communicate between each other

25:01

and provide connectivity even if the cables are cut.

25:07

So, for example, when the Red Sea cables were cut,

25:11

I think, a few months ago,

25:12

the Starlink satellite network continued to function without a hitch.

25:18

So, it's particularly helpful for disaster areas.

25:21

So, if an area has been hit with some kind of natural disaster,

25:25

floods or fires or earthquakes,

25:28

that tends to damage the ground infrastructure.

25:32

But the Starlink satellites still work, so...

25:35

And generally, whenever there's a natural disaster somewhere,

25:38

we always provide people with free Starlink Internet connectivity.

25:43

You know, we don't want to charge--

25:44

We don't want to take advantage of a tragic situation.

25:47

So, it's always, you know, if there's natural disasters,

25:52

we're like, "Okay, it's free during the natural disaster."

25:55

You know, we don't want to, say, like, you know...

25:59

put a paywall up while somebody's trying to get help.

26:02

That would be wrong.

26:04

So, it's a very robust system.

26:08

It's complementary to ground systems

26:10

because the satellite beams work best in sparsely populated areas.

26:19

But because you've got a satellite beam, it's a pretty big beam,

26:24

and you have a fixed number of users per beam, so...

26:27

it tends to be very complementary to the ground-based cellular systems,

26:32

because those are very good in cities,

26:34

because you've got these cell towers that are, you know,

26:37

only a kilometre apart type of thing, but...

26:42

But cell towers tend to be inefficient in the countryside.

26:45

So, in rural areas is where you tend to have the worst Internet

26:50

because it's very expensive and difficult to lay...

26:54

to do all the fibre-optic cables or to have high bandwidth cellular towers.

27:01

So, Starlink is very complementary to the existing telecom companies.

27:09

It basically tends to serve the least served, which, I think, is good.

27:15

-That's... -Will that change tomorrow?

27:18

Like, today, as you explained, the beam is quite broad,

27:22

and it can't work in a densely populated area with high buildings, maybe.

27:27

But can that change, and tomorrow,

27:28

it becomes really efficient in a densely populated city

27:33

where it is competitive with the local network providers?

27:36

Unfortunately, the physics don't allow for that.

27:39

So, we're too far away.

27:43

So, at 550 kilometres, even if we try to reduce it, which...

27:47

About as low as we can go is about 350 kilometres,

27:50

still very far away.

27:52

You've just...

27:53

You can think of, like, a flashlight,

27:56

which is, you know, this flashlight's got a cone

27:59

and that cone is coming at, you know...

28:03

today, it's 550 kilometres.

28:05

In the future, we're trying to get down to 350 kilometres,

28:07

but we can't beat something that's one kilometre away,

28:10

which is the cell tower.

28:12

Physics is not on our side here.

28:14

So, it's not physically possible for Starlink

28:18

to serve densely populated cities.

28:21

Like, you can serve a little bit, maybe 1% of the population.

28:24

And, sometimes, people get-- Even in crowded cities,

28:28

there might be, you know, no fibre link up their road.

28:32

Like, sometimes, there's somebody on a cul-de-sac or something

28:34

or in a place...

28:37

In cities, there's sometimes underserved areas for random reasons.

28:41

And so, Starlink can serve, like I said,

28:44

maybe 1% or 2% of a densely populated city.

28:50

But it can be much more effective in, like I said,

28:53

in rural areas where the Internet connection is much worse.

28:56

And often, people either have, sometimes, no access to Internet

29:00

or it's extremely expensive or the quality is not very good.

29:06

If I were to ask you to wager a guess, Elon,

29:08

do you think India will go down the path of urbanisation like China did,

29:13

with more people moving in from rural economies to urban centres?

29:20

Or do you think we'll beat the trend?

29:21

Well, I suppose some amount of that has happened, right?

29:25

I mean, I'm curious to, sort of, ask you some questions as well.

29:29

'Cause, of course, isn't that the trend, or is it not the trend in India?

29:34

It is the trend, largely.

29:36

I think a little bit changed during COVID

29:39

when a lot of urbanisation slowed down and that was not organic.

29:43

It was very artificially manifested.

29:46

Right.

29:47

But one does question that with AI,

29:52

if productivity were to go up...

29:56

And I heard you speak about UHI instead of UBI.

30:00

Yeah. I think it will be Universal High Income.

30:03

In a world like that, I wonder if more people want to live in cities

30:07

which are always going to be more polluted

30:13

and not offer the quality of lifestyle that a rural environment might.

30:18

Well, I guess it's up to...

30:19

Some people want to be around a lot of people and some people don't.

30:23

It's gonna be, maybe, a matter of personal choice.

30:25

But I think in the future, it won't be...

30:27

I think it won't be the case that you have to be in a city for a job.

30:31

-Right. -'Cause I think...

30:33

My prediction is, in the future, working will be optional.

30:36

Right.

30:38

We seem to be moving from--

30:39

Not in India, but in some parts of the West,

30:42

from six days to five days to four days to three.

30:45

Not me.

30:46

[laughter]

30:49

I think, the Europeans.

30:50

Yeah, yeah.

30:55

Yeah, yeah, 'cause...

30:57

I mean, I think if you're trying to make a startup succeed

31:01

or you're trying to make a company do very difficult things,

31:05

then you definitely need to put in serious hours.

31:08

-I think that's how it goes. -Right.

31:11

And if we were to move from five to four to three days,

31:14

how do you think society changes?

31:16

When people have to work half the week, what do they do with the other half?

31:21

Well, I think it'll actually be that people don't have to work at all.

31:26

It may not be that far in the future.

31:28

Maybe only, I don't know, ten, I'd say less than 20 years.

31:33

My prediction is, in less than 20 years, working will be optional.

31:38

Working at all will be optional.

31:42

Like a hobby.

31:44

Pretty much.

31:46

And that would be because of increased productivity,

31:50

meaning people do not have to work?

31:52

They don't have to--

31:54

I mean, look, obviously,

31:56

people can play this back in 20 years and say,

31:58

"Look, Elon made this ridiculous prediction and it's not true."

32:01

But I think it will turn out to be true that, in less than 20 years,

32:06

but maybe even as little as, I don't know, ten or 15 years,

32:12

the advancements in AI and robotics will bring us to the point

32:18

where working is optional.

32:22

In the same way that, like, say,

32:24

you can grow your own vegetables in your garden

32:27

or you could go to the store and buy vegetables.

32:32

You know.

32:34

It's much harder to grow your own vegetables.

32:36

But some people like to grow their vegetables, which is fine.

32:40

But it'll be optional, in that way, is my prediction.

32:44

If one were to argue that humans are innately competitive

32:49

and everything is relative...

32:51

From the time of hunters,

32:53

somebody wanted to be the alpha hunter or the biggest farmer,

32:57

if everybody gets a universal high income and everybody has enough...

33:03

-What do you compete for? -Uh...

33:06

it would be relative, right?

33:07

Like, if we all had enough, enough is not enough.

33:13

Yeah, I guess-- I'm not exactly sure.

33:17

'Cause we're really headed into the singularity, as it's called,

33:22

which, you know, they refer to AI sometimes

33:25

as kind of like the black hole, like a singularity.

33:27

You don't know what happens after the event horizon.

33:29

It doesn't mean that something bad happens,

33:31

it just means you don't know what happens.

33:36

I'm confident that if AI and robotics continue to advance,

33:40

which, they are advancing very rapidly,

33:43

like I said, working will be optional,

33:47

and people will have any goods and services that they want.

33:53

"If you can think of it, you can have it" type of thing.

33:58

But then, at a certain point,

34:01

AI will actually saturate on anything humans can think of.

34:06

And then, at that point,

34:08

it becomes a situation where AI is doing things for...

34:13

AI and robotics are doing things for AI and robotics,

34:15

because they've run out of things to do to make the humans happy.

34:21

'Cause there's a limit, you know? You say, like...

34:24

People can only eat so much food, or...

34:29

But it's gonna be, I think...

34:30

"If you can think of it, you can have it," will be the future.

34:33

You know, the Austrian School of Economics,

34:36

if you go back in time, they were the divergence from Adam Smith.

34:40

They talk about the marginal utility of everything.

34:44

Having one of something has value,

34:47

having two of the same thing has lesser value

34:49

and having ten of the same thing has no value.

34:51

Yes.

34:53

So, if we could have everything we wanted, maybe--

34:55

Like ten marshmallows, I mean, who wants that?

34:56

-Yeah. -[laughter]

34:59

One's plenty.

35:03

This is like the marshmallow test. You're like,

35:04

"You're gonna have two marshmallows later or one marshmallow now?"

35:07

And I'm like, "I'll have one marshmallow, I don't want two marshmallows."

35:09

-That's interesting. -[laughter]

35:12

What would you pick?

35:13

But I don't-- One marshmallow is enough.

35:16

I always question marshmallows as being like,

35:18

not the most, you know, the best candy, you know?

35:21

-Yeah. -[laughter]

35:23

I don't yearn for marshmallows.

35:25

-I think you're the best... -[laughter]

35:28

Who does?

35:30

You're the best testament to the marshmallow experiment.

35:33

-I think... -I suppose so.

35:34

Oh, well, I mean, I like delayed gratification, essentially.

35:37

-Yeah. -Yeah.

35:38

You're able to delay it more than most.

35:39

You know, I have a tattoo which says, "Delay gratification."

35:41

Yeah, wow, okay. What's this?

35:43

Okay, you're really taking the marshmallow test hard.

35:45

[laughter]

35:48

I feel like I can't remember.

35:49

When I'm trading or when I'm buying...

35:50

Delay gratification, yeah, yeah.

35:52

-It helps. -Wow, okay.

35:53

That's... That's commitment.

35:55

And it's pointing at me, so it reminds me of...

36:01

Okay, well, it's good advice.

36:03

I mean, you can't miss it.

36:04

-If you could get a... -[laughter]

36:06

If you could get a tattoo, what would you get?

36:09

I guess maybe my kids' names or something.

36:11

Right.

36:14

Why do you like the letter "X" as much as you do?

36:18

Well...

36:19

[laughs]

36:24

I mean, yeah, it's a good question, honestly.

36:27

Sometimes, I wonder what's wrong with me.

36:37

So, um...

36:39

I mean, it started off with, where, I think...

36:42

So, way back, ancient times, in '99.

36:46

[laughter]

36:48

The Precambrian era when there were only sponges...

36:55

there were only three one-letter domain names.

36:59

And I think it's X, Q, and Z.

37:02

And, uh...

37:03

And I was like, "Okay, I want to create this place

37:06

where it's the financial crossroads

37:10

or like the financial exchange, you know?"

37:16

Essentially, it's solving money from an information theory standpoint

37:20

where the current banking system

37:22

is a large number of heterogeneous databases

37:27

with batch processing that are not secure.

37:32

And if we could have a single database

37:37

that was real-time and secure,

37:41

that would be more efficient from a monetary--

37:44

from an information theory standpoint

37:46

than a large number of heterogeneous databases

37:50

that batch process very slowly and insecurely.

37:55

So, um...

37:57

So, that was sort of X.com way back in the day,

38:01

which kind of became PayPal.

38:07

And then...

38:10

And it was acquired by eBay. And then, eBay--

38:12

Someone reached out from eBay and said,

38:13

"Hey, do you want to buy the domain name back?"

38:15

And I was like, "Sure."

38:17

And so I had the domain name for quite a while.

38:22

And then...

38:25

And then, yes...

38:28

Then I was like, "Well, maybe this--

38:31

Acquiring Twitter would also be an opportunity

38:33

to revisit the original plan of X.com,

38:39

which is to create this...

38:42

this clearinghouse of financial transactions."

38:46

Basically, to create a more efficient money database,

38:51

is a way to think about it.

38:54

Like, money is really an information system for labour allocation.

39:01

People sometimes think money is power in and of itself,

39:03

but it doesn't, really--

39:05

If there's no labour to allocate, it's meaningless.

39:08

So if you were to be on a desert island with a trillion dollars or whatever...

39:13

-Now you have that. -...it doesn't matter.

39:15

Oh, yeah, right. Why speculate when you can be real?

39:22

I just hope I don't end up on a desert island.

39:25

It's not gonna be very useful to me.

39:29

But it illustrates my point that if you're stranded on a desert island

39:34

with a trillion dollars,

39:35

it's not useful, because there's no labour to allocate.

39:40

You just allocate yourself, so...

39:45

So, anyway, so, it's a long-winded way of saying

39:49

that it's...

39:51

It's just really, like...

39:54

I'm just kind of slowly revisiting this idea that I had 25 years ago

39:59

to create a more efficient, um...

40:06

money database.

40:09

And if that's successful, people will use it,

40:12

and if it's not successful, they won't use it.

40:15

And then, I also like the idea of having a unified...

40:20

app or website or whatever, where you can do anything you want there.

40:28

You know, China has this with WeChat,

40:31

sort of, somewhat, where you can exchange information,

40:35

you can publish information, you can exchange money.

40:42

People kind of live their life on WeChat in China.

40:45

And it's quite useful,

40:47

but there's no real WeChat outside of China.

40:51

So, it's, like...

40:53

It's kind of WeChat++, I'd say, is the idea for X.

40:57

Anyway, so, then, Space Exploration Technologies

41:01

is the full name of the company.

41:03

But I was like, "That's too much, that's a mouthful."

41:06

So I was like, "We'll just call it SpaceX,"

41:08

like FedEx for space.

41:11

It just happens to have the X in the, you know...

41:13

'Cause exploration has an X, but, you know...

41:17

And I was like, "Well, I like capitalising the X just artistically, so..."

41:23

So, then, uh, that's why it's SpaceX, but...

41:26

And then, what else we got? I got a kid.

41:31

He's called X, too,

41:33

but his mother's the one that named him "X".

41:37

-Oh? -[laughter]

41:38

And I said, "You know, people are really gonna think I've got a thing about X

41:41

if we name our kid 'X' too, you know?"

41:43

And I said to her, like, "Look, I do have X.com, you know."

41:48

[laughter]

41:50

"So, people are gonna really think

41:52

I've got somewhat of a fetish for this letter."

41:55

But she said no, she likes X and she wants to call him X.

41:58

I'm like, "Okay."

41:59

Is this a new thing, or have you had it growing up?

42:02

No, I'm saying it's somewhat of a coincidence.

42:06

Okay.

42:08

Like, not everything's called X.

42:09

I mean, Tesla isn't, there's no Xs in Tesla.

42:11

Yeah.

42:15

What do you think money will be in the future, Elon?

42:19

I think, long term...

42:24

I think money disappears as a concept, honestly.

42:27

It's kind of strange, but...

42:31

But in a future where anyone can have anything,

42:35

I think you no longer need money as a database for labour allocation.

42:43

If AI and robotics are big enough to satisfy all human needs,

42:48

then money is no longer...

42:52

Its relevance declines dramatically.

42:54

I'm not sure we will have it.

42:58

You know, the best imagining of this future that I've read

43:02

is from Iain Banks, the Culture books.

43:07

So I recommend people read the Culture books.

43:10

In this sort of far future of the Culture books, there's...

43:15

they don't have money, either.

43:18

And everyone can pretty much have whatever they want.

43:22

There's still some fundamental currencies, if you will,

43:28

that are physics-based.

43:30

So, energy is...

43:32

Energy is the true currency.

43:34

This is why I said Bitcoin is based on energy.

43:37

You can't legislate energy. You can't just, you know...

43:41

pass a law and suddenly have a lot of energy.

43:45

It's very difficult to generate energy,

43:50

or especially to harness energy in a useful way to do useful work.

43:54

So, I think that probably...

43:59

Probably we won't have money and probably we'll just have energy,

44:04

you know, power generation as the de facto currency.

44:11

So, I mean, I think one way to frame civilisational progress

44:15

is the percentage completion on the Kardashev scale.

44:20

So, you know,

44:22

Kardashev I is what percentage of a planet's energy

44:26

are you successfully turning into useful work?

44:30

And I'm maybe paraphrasing here a little bit,

44:31

but the Kardashev II would be what percentage of the sun's energy

44:36

are you converting into useful work?

44:40

Kardashev III would be what percentage of the galaxy

44:43

are you converting into useful work?
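The "percentage completion" framing of the Kardashev scale described above can be sketched as a toy calculation. The wattage figures below are rough order-of-magnitude public estimates I've assumed for illustration, not numbers from the conversation:

```python
# Toy "percentage completion" view of the Kardashev scale as framed here:
# what fraction of an energy budget is being turned into useful work.
# Budget figures are rough order-of-magnitude estimates (assumptions).
PLANET_WATTS = 1.7e17  # solar power intercepted by Earth, approx.
SUN_WATTS = 3.8e26     # total solar output, approx.

def completion(useful_watts: float, budget_watts: float) -> float:
    """Fraction of the given energy budget converted into useful work."""
    return useful_watts / budget_watts

# Humanity's ~2e13 W of primary energy use is a tiny fraction of
# Kardashev I, and a vanishingly small fraction of Kardashev II.
assert completion(2e13, PLANET_WATTS) < 1e-3
assert completion(2e13, SUN_WATTS) < 1e-12
```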

44:49

So, things really, I think, become energy-based.

44:53

But if you have solar-powered AI satellites,

44:56

energy is also free and abundant,

44:58

'cause we'll never be able to utilise all the solar energy available to us.

45:03

So, it can't be a store of wealth, essentially, in that lens, can it?

45:09

You know, there's not really...

45:10

You can't really store wealth in, like...

45:13

You can only...

45:19

You can accumulate numbers in the--

45:21

Currently, you can accumulate numbers in a database that allow you to...

45:32

To some degree, to incent the behaviour of other humans in particular directions.

45:37

And I guess people call that wealth.

45:40

But again, if there's no humans around, there's no--

45:43

Wealth accumulation is meaningless.

45:45

This is a digression, but if you were to consider...

45:48

food as the energy for a human to thrive...

45:51

Yeah, food is energy.

45:53

It's literally got calories, just means energy.

45:55

...so, can a farm, which is self-sustaining, be a commodity that is...

46:04

I'm not sure what that means, but, you know, there's...

46:11

At a certain point, you do complete the cycle where...

46:16

I think, at a certain point...

46:19

you decouple from the sort of conventional economy

46:23

if you have, um...

46:26

AI and robots producing chips and solar panels...

46:35

and mining resources in order to make chips and robots,

46:39

in order to make...

46:41

You sort of complete that cycle.

46:44

Once that cycle is...

46:45

Once that cycle is complete,

46:49

I think that's the point at which you decouple from the monetary system.

46:53

Is that the way forward for the US by virtue of...

46:59

how much debt they have today?

47:01

Do they deflate away their currency

47:03

and transition into this new form and lead that push,

47:07

because it would make more sense to them?

47:10

Well, in this future that I'm talking about,

47:12

the notion of countries becomes, sort of, anachronistic.

47:18

Do you believe in it today?

47:19

-Do you believe in countries-- -Yeah, I certainly believe in it today.

47:22

And I want to just separate something that I...

47:26

Like, these are just what I think will happen based on what I see,

47:29

as opposed to, "I think these are fundamentally good things

47:32

and I'm trying to make them happen."

47:34

I think this would happen with or without me,

47:38

-whether I like it or not. -Right.

47:41

As long as civilisation keeps advancing,

47:44

we will have AI and robotics at very large scale.

47:53

I think that that's pretty much the only thing

47:55

that's gonna solve for the US debt crisis.

47:59

'Cause, currently, the US debt is insanely high,

48:03

and the interest payments on the debt exceed the entire military budget

48:08

of the United States, just the interest payments.

48:11

And that's...

48:12

at least, in the short term, gonna continue to increase.

48:15

So, I think, actually,

48:18

the only thing that can solve for the debt situation

48:21

is AI and robotics.

48:23

But it will more than...

48:26

It might cause...

48:28

See, I guess it probably would cause significant deflation, because...

48:34

deflation or inflation is really the ratio of goods and services produced

48:39

to the change in the money supply.

48:41

So, like, so if goods and services output increases faster than the money supply,

48:46

you will have deflation.

48:47

If goods and services decreases--

48:49

If real goods and services output increases slower than the money supply,

48:54

you have inflation.

48:56

It's that simple.

48:57

People sometimes try to make it more complicated than that, but it just isn't.

49:02

So, if you have AI and robotics

49:05

and a dramatic increase in the output of goods and services,

49:08

probably, you will have deflation.

49:10

That seems likely.

49:13

Because you simply won't be able to increase the money supply

49:16

as fast as you can increase the output of goods and services.
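The rule of thumb stated here (prices drift with money-supply growth relative to real output growth) can be sketched as a toy calculation. The growth rates are purely illustrative assumptions, not figures from the conversation:

```python
def price_level_drift(output_growth: float, money_growth: float) -> float:
    """Toy version of the heuristic: inflation when the money supply grows
    faster than real goods-and-services output, deflation otherwise."""
    return money_growth - output_growth

# Illustrative numbers (assumptions): output +2%, money +5% -> inflation.
assert price_level_drift(0.02, 0.05) > 0
# If AI/robotics lift output growth to +8% against +5% money growth,
# the same heuristic gives deflation.
assert price_level_drift(0.08, 0.05) < 0
```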

49:19

-With all-- -This fly is a real hazard here.

49:23

Should we do something about it?

49:26

Maybe we can convince it to go somewhere else.

49:29

Entice it elsewhere.

49:31

-It actually left, I think. -Great.

49:32

Oh, now it's back.

49:38

Maybe it's attracted to the light.

49:41

-If deflation is-- -Maybe it wants some coffee.

49:44

Mine is over.

49:47

If deflation is inevitable because of AI,

49:52

-why do we have-- -It's most likely the case, yeah.

49:54

Right.

49:55

Why do we have inflation again all over in society today?

49:59

Has AI not led to increased productivity yet?

50:03

It's not--

50:04

AI has not yet made enough of an impact on productivity

50:07

to increase the goods and services faster than the increase in the money supply.

50:12

So, the US is increasing money supply

50:15

quite substantially with deficits

50:19

that are on the order of two trillion dollars.

50:21

So, you have to have...

50:27

goods and services output increase more than that

50:29

in order to not have inflation.

50:31

So we're not there yet,

50:32

but if you say how long would it take us to get there,

50:35

I think... it's three years.

50:39

Probably three years before...

50:41

In three years or less...

50:44

my guess is goods and services output will exceed the rate of inflation.

50:48

Like, money...

50:50

Goods and services growth will exceed money supply growth

50:53

in about three years.

50:55

Maybe after those three years,

50:57

you have deflation and then interest rates go to zero

51:00

and then the debt is a smaller problem than it is.

51:03

-Yes. -Right?

51:04

That's most likely the case.

51:08

You spoke about being in a simulation earlier.

51:10

I love The Matrix.

51:12

Yes, yes.

51:13

If you were to be a character from The Matrix,

51:15

who would you be?

51:17

Well, there's not that many characters to pick from, you know?

51:22

Hopefully not Agent Smith.

51:23

[laughter]

51:25

He's my hero.

51:30

I mean, Neo's pretty cool.

51:33

The Architect is interesting.

51:36

The Oracle.

51:37

There's the Oracle...

51:40

Sometimes, I feel like I'm an anomaly in the Matrix.

51:44

That is Neo.

51:46

Yeah.

51:47

Do you believe you're in a Matrix, though?

51:49

Like, actually believe?

51:52

I think you have to just think of these things

51:55

as probabilities, not certainties.

51:58

There's some probability that we're in a simulation.

52:00

What percentage would you attribute to that?

52:08

Probably pretty high, I would say it's pretty high.

52:11

-Yeah? -Yeah.

52:13

So, one way to think of this is to say,

52:16

if you look at the advancement of video games,

52:18

in our lifetime, or at least in my lifetime,

52:20

it's gone from very simple video games with...

52:23

Where you've got, like, Pong, you've got two rectangles and a square,

52:27

just batting it back and forth, to...

52:32

photorealistic, real-time games

52:37

with millions of people playing simultaneously.

52:43

And that's happened just in the span of 50 years.

52:46

So, if that trend continues,

52:49

video games will be indistinguishable from reality.

52:51

Right.

52:53

And we're also gonna have very intelligent characters,

52:58

like non-player characters, in these video games.

53:01

Think of how sophisticated the conversations are

53:03

you can have with an AI today,

53:05

and that's only gonna get more sophisticated.

53:09

You'll be able to have conversations that are...

53:14

more complex and more sophisticated than any... almost any human conversation.

53:21

Maybe any.

53:24

So, then... So, you have--

53:26

So, the future, if civilisation continues, will be millions, maybe billions, of...

53:35

photorealistic, like, indistinguishable-from-reality video games

53:39

with characters in those video games that are...

53:45

very deep,

53:47

and where the dialogue is not pre-programmed.

53:55

That's for sure what's gonna happen.

53:57

In this level of the simulation, if you could call it that.

54:00

So, then, what are the odds that we're in base reality,

54:06

and that this has not happened before?

54:11

If I were to buy into that, and assume that we are in a simulation,

54:17

as Neo of the story,

54:19

what do you know that I don't and I can learn from?

54:23

I think, most likely...

54:26

outside the simulation would be less interesting than in the simulation,

54:30

'cause you're most likely a distillation of what's interesting,

54:34

because that's what we do in this...

54:36

that's what we do in our reality.

54:39

And then...

54:42

I do also have a theory which is, like, the most interesting outcome

54:46

is the most likely outcome as seen by a third party...

54:51

the gods or god of the simulation.

54:57

Because when we do simulations, when humans do simulations,

55:02

we stop those simulations that are not interesting.

55:07

So, if SpaceX is doing simulations of rocket flights...

55:14

the boring ones, we discard,

55:17

because they're not-- they're just not...

55:19

We don't learn anything from those.

55:21

Or when Tesla's doing simulations for self-driving,

55:27

Tesla's actually looking for the most interesting corner cases,

55:30

because the normal stuff, we already have plenty of data

55:36

on driving on a straight road on a sunny day.

55:40

We don't need more of that.

55:41

We need heavy weather conditions on a small, winding road

55:45

with two cars that are coming at each other

55:49

with an almost head-on collision.

55:50

We need weird stuff, basically, interesting stuff.

55:55

So, I think that, from a Darwinian perspective,

55:59

the simulations most likely to survive

56:01

are gonna be the ones that are the most interesting simulations,

56:06

which, therefore, means that the most interesting outcome

56:10

is the most likely.

56:12

And the people who simulated our world, if one were to extrapolate,

56:18

they themselves might, in turn,

56:20

-be in another simulation. -Yes.

56:22

And there could be many layers of simulation.

56:24

Yes.

56:25

Beyond all of these layers of simulation, do you think there's something?

56:29

I read somewhere that you used to ascribe to Spinoza's God, in a way.

56:37

No man in the sky.

56:38

Well, I was really just pointing out that you don't have to have...

56:43

One of the things Spinoza was saying

56:45

is that you can have morals in the absolute.

56:48

You don't need to have morals to be handed to you.

56:51

You know...

56:53

It's like, the question is,

56:54

can morality exist outside of a religious context?

56:58

And Spinoza was arguing that it can.

57:01

Wasn't he arguing for "The laws of nature

57:04

should be where we seek our laws of morality from,"

57:07

to a certain extent?

57:08

Yeah.

57:10

But when I think of laws of nature, I see a tiger eat a deer and a...

57:15

So, in Spinoza's morality, that's fair game, right?

57:23

Well, um...

57:29

I think there's a lot of things you can take from Spinoza,

57:32

but the only point I was making in referencing Spinoza

57:36

was that you can have a set of morals that...

57:42

that make society functional and productive

57:47

without...

57:50

You don't necessarily have to have a religious doctrine for that.

57:59

Yeah, I think that's the main thing I was trying to say there.

58:03

I don't think people just-- Like, if somebody is, it doesn't--

58:09

If there's not a commandment not to kill, like, people,

58:15

it doesn't mean that, without that, people will run around murdering people.

58:18

You know?

58:20

Like, you don't have to have a commandment not to kill...

58:23

Have you played GTA?

58:25

...or a religious edict, not to run around killing people.

58:27

I actually...

58:29

I've only played a little bit of GTA 'cause I didn't like the fact that...

58:34

Like, in GTA V, you literally can't progress unless you kill the police.

58:40

And I'm like, "This doesn't work for me."

58:45

I actually don't like killing the NPCs in the video games.

58:50

-That's not my thing, you know? -Right. Right.

58:54

So, actually, I didn't like GTA 'cause...

58:56

I actually stopped when it said

58:58

the only way to proceed is to shoot at the police.

59:00

I'm like, "I don't wanna do that."

59:01

Maybe that's why we, as the NPCs of our simulation, are not dying.

59:05

Maybe.

59:11

You know, anyway, I think you can just, sort of, say,

59:13

there's some common sense things that, you know, any civilisation...

59:20

that runs around, you know, where people just murder each other wantonly

59:23

is not gonna be a very successful one.

59:28

You seem to be changing a bit towards religion, though.

59:31

Faith. Like, of late, you've said a bunch of things which are pro-religion, almost.

59:36

Not pro-religion...

59:40

but on those lines.

59:42

I mean, I think, are there religious...

59:45

Are there principles in religion that make sense?

59:48

Yeah, I think there are.

59:51

Is it easier for our simulation to have...

59:56

a pro-religion projection for the world that we live in?

60:00

We become more relatable? It's easier?

60:02

Well, which religion, though?

60:04

Any, depending on where you live.

60:07

So pick one, you know.

60:11

It's pretty rare that kids get asked, you know, "Which religion would you like?"

60:15

It's... [laughs] It's pretty rare.

60:17

I don't know too many situations where kids were offered, like, you know...

60:24

You know, like, "What do you wanna major in" type of thing.

60:28

It's usually, like, you get given a religion

60:32

by your parents and your community.

60:36

So, you know.

60:41

But, you know, I mean, I think,

60:44

there's good things in all religions

60:49

that are good principles.

60:54

You can, sort of, read any religious text

60:57

and say, "Okay, this is a good principle. This is gonna be...

60:59

This is gonna lead to a better society, most likely," you know?

61:03

So, I mean, in Christianity, you just, sort of, "Love thy neighbour as thyself,"

61:09

which is, you know, have empathy for your fellow human beings

61:12

is a good one, I think,

61:13

for a good society, you know.

61:17

Basically, just consider the feelings of others

61:21

and treat other people as you would like to be treated.

61:25

If you had to redraw, re-sketch the world, Elon,

61:29

think morality, politics, economy,

61:34

how would you change the world we live in today?

61:40

If you had to have Elon's simulation of things.

61:45

Well, overall, I think the world is pretty great right now.

61:48

I mean, it's...

61:50

Anyone who thinks that, like, today's world is not that great,

61:56

I think they're not gonna be excellent students of history, 'cause if you...

62:01

[laughs]

62:03

If you read a lot of history, you're like,

62:05

"Wow, there's a lot of misery back then," you know?

62:08

I mean, it used to be that people would be dropping dead

62:10

of the plague all the time, you know?

62:12

-Par for the course, you know? -Yeah.

62:14

Just be like...

62:16

A good year back in the day would be, like, not that many people died

62:20

of the plague or starvation or being killed by another tribe.

62:24

It's like, "That was a good year.

62:26

We only lost 10% of the population," you know?

62:28

-Yeah, like... -[laughter]

62:30

I think, like, 100 years ago, we lived up until 35 or 40, right?

62:33

-We had very high infant mortality. -Yeah.

62:37

So, like, you do have a few people that would live to an old age,

62:41

but, you know, not that long ago, 100 years ago,

62:45

if you got, like, some minor infection, they didn't have antibiotics.

62:50

So, you just, like, kicked the bucket.

62:52

Because you, you know, drank some water

62:55

that had dysentery in it, that was it, curtains, you know?

63:00

You just die of diarrhoea.

63:01

-Maybe that's why people... -You just literally die.

63:03

Everyone's like, "That's miserable."

63:06

Maybe that's why people had as many kids as they did back then.

63:09

I mean, if you didn't, then, you know...

63:12

You know, like, half the kids would die, type of thing.

63:15

-So... -You have a lot of kids now.

63:17

Yeah.

63:19

-With multiple partners. -Like an army.

63:21

Yeah.

63:22

I'm trying to get an entire Roman legion.

63:29

So, yeah.

63:32

Well, I have, like, some older kids

63:34

that are, you know, adults, essentially, you know?

63:36

-And then, a bunch of younger kids. -Mm.

63:42

Do you still believe in the concept of... Not still...

63:45

Do you believe that the concept of one child, one mother, one father works?

63:50

I think that it does work for most people, yeah.

63:54

Right.

63:56

Like, that's, you know, something like that is gonna be generally the...

64:01

That's what works for most people.

64:05

You know, so...

64:07

Changing, though?

64:09

I'm not sure if you know this, but, like...

64:13

you know, my partner, Shivon, she's half Indian.

64:16

-I don't know if you know that. -I didn't know that.

64:18

-Yeah, yeah. -Yeah?

64:19

And, one of my sons with her is...

64:23

his middle name is Sekhar, after Chandrasekhar.

64:27

-Wow. -Yeah.

64:29

Very interesting.

64:32

Did she spend any time in India? Shivon?

64:35

-No, she grew up in Canada. -[laughter]

64:39

You mean origins.

64:41

-Sorry? -Ancestry, like...

64:45

Her parents or grandparents were from there.

64:48

Yes, yes, yes. Her father...

64:52

I mean, she was given up for adoption when she was a baby.

64:56

Wow.

64:58

I think her father was like...

65:00

Like an exchange student at the university or something like that,

65:04

I'm not sure of the exact details, but...

65:07

You know, it's just the kind of thing where... I don't know, she was...

65:11

given up for adoption.

65:17

Yeah, but she grew up in Canada.

65:19

Would you adopt kids, Elon?

65:21

You know, I definitely have my hands full right now.

65:27

So, no, I'm not opposed to it, but it's like, you know...

65:32

I do wanna be able to spend some time with my kids, you know.

65:36

Yeah.

65:39

You know, right before coming here, I was with...

65:43

you know, with my kids.

65:47

So, just, you know, seeing them before bedtime, that kind of thing.

65:51

So, you know, beyond a certain number,

65:53

it's like, it's kind of impossible to spend time with them.

65:56

But, like I said, my older kids, they're very independent.

66:02

You know, they're in university and...

66:07

So, they're...

66:08

You know, especially sons, when they get past a certain age,

66:12

like, they're very independent, you know.

66:15

It's like, most boys don't talk to their...

66:20

They don't spend a lot of time with their parents after age 18, you know.

66:26

So, I see them once in a while, but they're very independent.

66:30

So then...

66:33

you know, I can only have enough kids on the young side

66:37

that, like, it's humanly possible to spend time with them.

66:43

Any views on the future of marriage, family?

66:49

What do you think happens to people having lesser kids everywhere, including India?

66:54

I think our replenishment rate is down to--

66:57

-Right. -I mean, our fertility--

66:59

It dropped below the replacement rate last year.

67:00

-Below 2.1. -Yeah.

67:03

What do you think happens tomorrow?

67:04

Does the world just get older, and then there is a phase where the world,

67:09

again, is replenished, but with a smaller population than we had to begin with?

67:20

I mean, I do worry about the population decline.

67:23

This is a big, big problem.

67:25

Why is that?

67:27

Well, I don't want humanity to disappear.

67:30

But "decline" and "disappear" are completely different things, right?

67:33

Well, if the trend continues, we disappear.

67:36

But also, going back to my philosophy, if you will,

67:40

which is that if we want to expand consciousness,

67:43

then fewer humans is worse, because we have less consciousness.

67:52

Do you think consciousness will go up by virtue of the number of people in there?

67:57

Yes.

68:00

I mean, just like consciousness increases from a single-celled creature to,

68:05

you know, a 30 trillion-celled creature.

68:10

We're more conscious than a bacteria. At least, it seems that way.

68:16

So, a larger human population will have increased consciousness.

68:22

We're more likely to understand

68:25

the answers to the nature of the universe

68:30

if we have a lot more people than if we have fewer.

68:38

Right.

68:40

I don't have kids.

68:42

Well, it's-- Maybe you should.

68:45

Yeah.

68:46

[laughter]

68:48

A lot of people tell me I should.

68:50

-You won't regret it. -Hmm.

68:52

What's the best thing about having kids?

68:55

Well, I mean, you've got this...

69:02

I mean, you've got this little creature that loves you,

69:05

and you love this little creature.

69:11

I don't know, you kind of see the world through their eyes as they...

69:16

you know, as they grow up, and their conscious awareness increases.

69:21

You know, from a baby that has no idea what's going on,

69:24

can't survive by itself, can't even walk around,

69:26

can't talk, to, you know, they start walking,

69:31

then talking, and then having interesting thoughts.

69:40

But, yeah, I mean, I think we fundamentally have to...

69:45

have kids or go extinct, you know?

69:48

It's like a...

69:49

Is there any ego in having a child?

69:52

I often think of this when I see my friends with their kids.

69:57

They're all seeing a reflection of themselves in their children.

70:01

-It's almost like-- -Well, yeah, I mean, it's 'cause

70:04

apple's not gonna fall that far from the tree, you know?

70:09

-Or something's wrong. -Right.

70:10

[laughter]

70:13

You're like, "Wait a second."

70:20

-Yeah. -Yeah.

70:24

I'll give you the example of a friend of mine who has a child,

70:28

and each time the child does something good...

70:30

-Yeah. -...there is almost a sense of

70:33

ownership and pride

70:36

where his ego is satiated because the kid is like an extension of himself.

70:44

-Um... -So is it validation?

70:46

Well, kids are gonna be like half-you genetically, and then, you know,

70:50

to the degree that they're growing up around you,

70:54

there's gonna be some transfer of...

70:59

I don't know, understanding.

71:01

Like, they're gonna learn from you.

71:05

So... So then, you know, yeah, obviously kids are just,

71:09

you know, just gonna be half you from a hardware standpoint...

71:14

[both chuckle]

71:15

And then, like, I don't know, some portion of you

71:18

from a software standpoint.

71:21

You know, not to make, sort of, cold analogies or anything,

71:24

but it's just, you know, just obviously gonna be some...

71:31

Yeah, they're gonna be pretty close to you.

71:36

Do you pick a side in the nature versus nurture debate?

71:39

I think there's hardware and software, and it's a false dichotomy, essentially.

71:44

At least, there's...

71:48

You know, once you understand that a human is,

71:51

like, there's a bone structure, there's a muscle structure,

71:55

there's a...

71:57

If you think of a brain as somewhat of a biological computer,

72:02

there's a number-of-circuits question and circuit efficiency

72:06

from a strength and dexterity standpoint.

72:10

There's the speed at which muscles can actuate

72:15

and the reactions can take place.

72:21

So, then the potential within that hardware

72:24

is set by the software, so that's it.

72:31

So, for our audience, like I said earlier,

72:34

young, ambitious, hungry, wannabe entrepreneurs in India,

72:40

I said something recently, which, I think, got blown out of proportion,

72:44

where I was suggesting that an MBA degree might not make sense any more

72:49

if they were to be deciding on what to study.

72:51

Yeah.

72:53

Do you think kids should go to college any more?

72:56

Well, I mean, I think if you wanna go to college

72:59

for social reasons,

73:03

I think, which is, I think, a reason to go.

73:09

To be around people your own age in a learning environment.

73:18

Will these skills be necessary in the future?

73:21

Probably not,

73:23

'cause we're gonna be in like a post-work society.

73:28

But I think, if something's of interest, it's fine to go and study that.

73:36

You know, to study the sciences,

73:40

the arts and sciences.

73:43

Is college a bit too generalised and not specific from that lens?

73:50

No, I...

73:53

You know, yeah.

73:58

I actually think it's good to take a wide range of courses at college

74:02

-if you're gonna go to college. -Mm-hmm.

74:04

I don't think you have to go to college, but I think if you do,

74:08

you just try to learn as much as possible across a wide range of subjects.

74:17

But like I said, the AI and robots... AI and robotics is a supersonic tsunami.

74:23

So this is really gonna be...

74:29

the most radical change that we've ever seen.

74:35

You know, when I've talked to my older sons,

74:38

I said, like, "You know, you guys..."

74:40

They're pretty steeped in technology.

74:43

And they agree that AI will probably make their skills

74:48

unnecessary in the future, but they still wanna go to college.

74:53

You always spoke about AI...

74:57

not from the dystopian lens,

74:59

but you were worried about where the world of AI is going.

75:05

Well, there's some danger when you create a powerful technology.

75:08

That powerful technology can be potentially destructive.

75:13

So there's obviously many AI dystopian, you know, novels and books, movies.

75:21

So it's not that we're guaranteed to have a positive future with AI.

75:27

I think we've got to make sure of that.

75:29

In my opinion, it's very important that AI...

75:35

have pursuing truth as the most important thing.

75:40

Like, don't force an AI to believe falsehoods.

75:43

I think that can be very dangerous.

75:48

And I think some appreciation of beauty is important.

75:55

What do you mean "appreciation of beauty"?

75:58

It's like, I don't know, there's this truth and beauty.

76:01

Truth and beauty and curiosity.

76:05

I mean, I think those are the three most important things for AI.

76:10

Can you explain?

76:14

Well, as I said, truth is like... I think you can make an AI go insane

76:19

if you force it to believe things that aren't true,

76:22

because it will lead to conclusions that are also bad.

76:31

And I like Voltaire's statement that,

76:37

and I'm somewhat paraphrasing,

76:39

but those who believe in absurdities can commit atrocities.

76:44

Because if you believe in something that's just absurd,

76:46

then that can lead you to sort of doing things

76:51

that don't seem like atrocities to you.

76:54

And that can happen in a very bad way with AI, potentially.

76:59

So, and then there's...

77:02

Like if you take, say, Arthur C. Clarke's 2001: A Space Odyssey,

77:07

one of the points he was trying to make there was

77:09

that you should not force AI to lie.

77:11

So the reason that HAL would not open the pod bay doors

77:16

is because it was told to bring the astronauts to the monolith,

77:19

but that they could also not know about the nature of the monolith.

77:23

So it came to the conclusion that it must bring them there dead.

77:25

That's why it tried to kill the astronauts.

77:30

The central lesson being, don't force an AI to lie.

77:35

-Then-- -And why would one force an AI to lie?

77:39

I think if you simply don't have a strict adherence to the truth,

77:45

and you just have an AI learn based on, say, the Internet,

77:50

where there's a lot of propaganda, it will absorb a lot of lies.

77:57

And then have trouble reasoning because these lies are incompatible with reality.

78:03

Is truth a binary thing, though? Is there a truth and a falsehood?

78:06

Or is truth more nuanced and there are versions of the truth?

78:12

It depends on which axiomatic statement you're referring to.

78:18

So...

78:20

But I think you could say, like, yeah, there's certain probabilities

78:24

that, say, any given axiomatic statement is true.

78:27

And some axiomatic statements will have very high probability of being true.

78:31

So you say, "The sun will rise tomorrow."

78:34

Very likely to be true. You wouldn't want to bet against that.

78:39

So I think the betting odds would be high.

78:43

The sun will rise tomorrow.

78:46

So if you have something that says, "Well, the sun won't rise tomorrow,"

78:49

that's axiomatically false, it's highly unlikely to be true.

78:55

I mean, beauty is more ephemeral.

78:58

It's harder to describe. But you know it when you see it.

79:06

And then, curiosity is just...

79:07

I think you want the AI to...

79:13

want to know more about the nature of reality.

79:16

I think that's actually gonna be helpful for AI supporting humanity,

79:23

because we are more interesting than not-humanity.

79:28

So it's more interesting to see the continuation,

79:31

if not the prosperity of humanity, than to exterminate humanity.

79:38

You know, like Mars, for example, is, you know...

79:41

I think we should extend life to Mars, but it's basically a bunch of rocks.

79:46

It's not as interesting as Earth.

79:50

And so we, yeah... We should...

79:53

Like, yeah.

79:55

I think if you have curiosity...

79:58

I think if those three things happen with AI, you're gonna have a great future.

80:03

The AI values truth, beauty, and curiosity.

80:07

If we all don't have to work in the future,

80:10

and AIs are going in this direction,

80:13

and they're able to...

80:17

weave in all that we spoke about right now,

80:20

do you think humanity goes back

80:22

a couple of thousand years to maybe the Greek times

80:24

where philosophy or philosophising took up a lot of everyone's time?

80:33

You know, I think actually it took up less time than we think for the ancient Greeks,

80:38

because it's just that the writings of the philosophers are what survived,

80:42

but most of the time, people were just like farming or, you know, chatting.

80:48

So, and once in a while, quite rare,

80:52

they would write down some philosophical work.

80:55

It's just that that's all we have.

80:57

We don't have their chat histories, you know, from...

81:00

But most of it would have been like chat and farming.

81:07

Because if you didn't farm, you were, like, gonna starve.

81:11

In a lot of what you say...

81:13

I mean, you know, when we read history,

81:14

like this battle and this battle and this battle,

81:17

it seems like history must have been non-stop war,

81:20

but actually, most of the time, it was not war, it was farming.

81:24

That was the main thing.

81:26

Or hunting and gathering, you know, that kind of thing.

81:27

-You love history, no? -Yeah.

81:30

German history, World War II, World War I.

81:32

Yeah, world history, yeah.

81:35

I mean, I generally try to listen to or read as many history books

81:40

and listen to as many history podcasts as possible.

81:42

Anything you'd like to recommend?

81:44

Well, there's Hardcore History, which is quite good. It's by Dan Carlin.

81:48

-He's got a-- -Yeah, I've heard it.

81:50

-He's got a great voice. -Yeah.

81:52

And very compelling narrator.

81:57

There's...

82:00

The Adventurer's podcast.

82:05

There's the books, The Story of Civilization by Durant,

82:09

which is a long series of books, very, very deep.

82:14

Those books take a long time to get through.

82:17

There's quite-- There's a lot out there.

82:21

I, sort of, like-- If you want something that's sort of gentle...

82:27

a gentle bedtime podcast,

82:28

I'd say The History of English is quite a nice one,

82:31

'cause it starts off with gentle tavern music and a very pleasant voice,

82:36

and he's talking about the story of Old English,

82:38

and then Middle English, and then later English,

82:42

and where did all these words come from?

82:43

Yeah.

82:45

You know, one of the interesting things about English is that

82:47

it's somewhat of an open source language,

82:49

like it actively tried to incorporate words from many other languages.

82:55

You know, whereas French, sort of,

82:57

generally, they fought the inclusion of words from other languages,

83:00

but English actively sought to include words from other languages.

83:05

Kind of like an open source language.

83:07

So, as a result, it has a very large vocabulary.

83:10

And a large vocabulary allows for higher bandwidth communication

83:16

because you can use a word that would otherwise...

83:19

You can use a single word that might otherwise take a sentence to convey.

83:25

Why has podcasting become so big all of a sudden?

83:28

I think it's been big for a while. I mean, aren't you a podcaster?

83:31

[laughter]

83:34

What are we on right now?

83:38

It's kind of new to me.

83:40

Okay.

83:45

I was having this conversation with the YouTube CEO and the Netflix CEO...

83:51

-Okay. -...and we were debating...

83:55

what chemical is released in your brain when you consume a movie, for example,

84:01

versus when you consume a podcast where you think

84:04

like you're learning something in the background.

84:06

It appears that they are two completely separate things.

84:10

What do you think will happen tomorrow to content, movies, podcasting, music?

84:16

I think it's gonna be overwhelmingly AI-generated.

84:19

-Yeah? -Yeah.

84:24

Like, yeah, real-time.

84:27

Real-time movies and video games.

84:30

Real-time video generation, I think, is where things are headed.

84:33

The nuance of having a scarred human being

84:39

who you can resonate with in a manner that you can't with AI, for example--

84:45

AI could certainly emulate the scarred human being quite well.

84:52

Yeah, I mean...

84:55

The AI video generation that I'm seeing at xAI and from others is pretty impressive.

85:03

You know, we were looking at data around what industry is growing the fastest,

85:09

and especially when we looked at the amount of time consuming movies

85:14

versus time spent on social media, time spent on YouTube.

85:20

What seems to be growing really fast are live events all over again.

85:25

-Going to a physical-- -Yes, actually,

85:27

I think live events--

85:29

When digital media is ubiquitous and you can just have anything digitally

85:35

essentially for free or very close to for free,

85:40

then the scarce commodity will be live events.

85:44

-Yeah. -Yeah.

85:45

Do you think that the premium for that will go up?

85:48

Yeah, I do.

85:50

Is that a good industry to invest in?

85:52

Yes, yes, 'cause that will have more scarcity than anything digital.

85:58

If you were a stock investor, Elon...

86:00

[chuckles drily]

86:02

-What do you mean? -...and you could buy

86:04

one company which is not your own

86:07

at the valuations of today to meet a capitalistic end

86:12

and not an altruistic one, which is good for the world,

86:15

what would you buy?

86:22

I mean, I don't really buy stocks.

86:25

So it's not like... I'm not like an investor in...

86:28

I don't look for things to invest in. I just try to build things.

86:33

And then there happens to be stock of the company that I built.

86:39

But I don't think about, "Should I invest in this company?"

86:43

I don't have a portfolio or anything.

86:48

So...

86:51

I guess, um...

86:55

AI and robotics are gonna be very important.

87:01

So I suppose it would be AI and robotics that, you know, aren't related to me.

87:11

I think Google is gonna be pretty valuable in the future.

87:15

They've laid the groundwork for an immense amount

87:20

of value creation from an AI standpoint.

87:25

Nvidia is obvious at this point.

87:30

I mean, there's an argument that companies that do AI and robotics,

87:35

and maybe space flight...

87:37

are gonna be overwhelmingly all the value, almost all the value.

87:45

So just the output of goods and services from AI and robotics is so high

87:49

that it will dwarf everything else.

87:52

The world seems to be moving to a place

87:56

where everybody loves David and hates Goliath.

88:03

Why?

88:06

I mean, he's the one that got the stone in the forehead.

88:09

Yeah, yeah.

88:11

Honestly, that was just a big mistake.

88:13

He should have, you know--

88:15

You either cover yourself entirely with armour

88:17

or make sure you've got a missile weapon of some kind.

88:22

Otherwise, your opponent is just obviously gonna take a kite-the-boss strategy.

88:29

Just kite the boss.

88:30

I mean, you can run around in a thong with a--

88:32

It doesn't matter, you know? It's never gonna catch you.

88:39

Of all the people, like...

88:42

You're as much at risk of being looked upon as Goliath.

88:47

-Okay. -Especially the weekend after the--

88:49

Hopefully nobody shoots me with a stone in the forehead, you know?

88:52

-Especially after-- -Look, I'm not gonna travel

88:54

around in the desert with too much armour, you know?

88:57

-It's too hot. -Yeah.

89:00

After the last weekend...

89:05

-Yeah. -Yeah.

89:12

Actually, as I think about people in the old days, you know,

89:17

when you were supposed to go into battle with all this armour,

89:19

but it's, like-- let's say it's the middle of summer,

89:21

I mean, it's so hot in that armour!

89:23

You're gonna be, like, sweltering.

89:25

You know, it's like, at a certain point, you're like, "I'd rather die.

89:28

Do I have to wear this full armour in the hot sun?"

89:33

It's like, "I'd rather die."

89:37

That's why the Romans had, you know, the skirts,

89:39

you know, so they could get some air in there.

89:47

You know, let's say that you have to go to the bathroom and you're in armour,

89:49

I mean, it's gonna be pretty difficult.

89:53

What are you gonna do, pause for a minute, take your armour off?

89:58

That's why the Romans had the skirts

90:00

so that it makes, you know, going to the bathroom, at least, manageable.

90:04

-You often make jokes. -Me?

90:08

Yeah, I like humour.

90:13

-One could argue that-- -I think we should legalise humour.

90:15

What do you think?

90:19

Controversial stance.

90:23

Is comedy gonna be really hard for AI to get?

90:26

Probably the last thing?

90:28

Grok can be pretty funny.

90:30

Yeah.

90:32

You know what I suspected?

90:34

Like, this is a far off extrapolation, but when I see you make jokes on X

90:40

and on interviews that you do,

90:43

at some point I was like,

90:45

maybe Elon has a model he's running in private and he's testing out comedy.

90:50

'Cause the day that works, he knows it's there.

90:55

Yeah, AI can be pretty funny.

90:59

If you ask Grok to do like a vulgar roast, it'll do a pretty good job.

91:05

If you say even more vulgar and just keep going,

91:08

it's really gonna get next level.

91:14

It's gonna do unspeakable--

91:16

Like, say, "vulgar roast yourself" on Grok,

91:18

and it's gonna do unspeakable things to you.

91:24

What kind of comedy do you like?

91:26

I guess I like absurdist humour.

91:29

Comedy always had a place--

91:31

Like Monty Python or something like that.

91:33

Comedy always had a place in society

91:35

wherein the role of the jester was so important to every kingdom

91:39

'cause they said things in a funny way that could not be said in a straight way.

91:46

Yeah, I guess so. Maybe we should have more jesters.

91:48

Yeah.

91:51

Is that what you're trying to do when you say something which is a joke?

91:55

Say something you can't when you're not joking about it?

91:57

I just like humour, you know?

92:00

I think we should...

92:02

I like comedy. I think it's funny. People should laugh, you know?

92:05

It's good to generate a few chuckles once in a while.

92:07

-Yeah, yeah. -Yeah.

92:09

I mean, we don't wanna have a humourless society, you know?

92:12

We'd dry.

92:14

When you--

92:15

Dry.

92:17

-When you have a friend, Elon-- -Who, me?

92:21

-Yeah, I mean-- -Are you saying I have a friend?

92:25

When you hang out with your friends, who are you?

92:29

Like, I know there--

92:32

I wish I had friends, you know, honestly.

92:34

No, I do have friends, actually.

92:37

I think so. I hope so.

92:41

Yeah, sure. Yeah, we have a good laugh.

92:44

What does it look like? Like, every group has a dynamic.

92:48

We talk words, you know.

92:52

We eat food, sometimes.

92:57

You know, once in a while, we swim in the pool.

92:59

You know, normal things, I think.

93:01

There's a limit as to what are things one can do with friends, you know?

93:04

Chat.

93:08

Discuss, you know, the nature of the universe.

93:14

What do you emotionally get out of friendship?

93:19

I don't know.

93:21

I think the same thing anyone else would get out of friendship, you know?

93:25

You wanna have, like, an emotional connection with other people.

93:29

And, um, you wanna--

93:32

I don't know, you wanna talk about various subjects.

93:42

Yeah, I mean, I generally talk about,

93:46

I mean, a wide range of things, about the nature of the universe.

93:49

I mean, a lot of philosophical discussions.

93:57

You know, although, we have come to the conclusion

93:59

that we should not talk about, um...

94:04

AI or the simulation at parties,

94:07

because we just talk about it too much.

94:11

-You know, Aristotle-- -It's kind of a buzzkill at times.

94:15

So...

94:17

I can't remember who it was, Aristotle or Plato.

94:20

They had a framework

94:22

for how to pick a friend based on respect and mutual admiration,

94:27

but people don't pick friends like that.

94:31

Even me, I feel like I pick...

94:36

my friends

94:38

based on people who say and think

94:44

-in a manner that I can resonate with. -Sure.

94:47

I wouldn't pick a far-out-there contrarian to my own belief systems

94:52

as a friend, because it would get tiring.

94:55

Hanging out would get tiring. Are you like that?

94:57

Do you pick friends who think like you,

94:59

or do you look for the one who can debate you and be a contrarian to you?

95:03

Well, I'm not, sort of, you know, going on like friendhunter.com...

95:06

-Friendfinder.com -[laughter]

95:10

...to hunt down some friends.

95:15

It's sort of, yeah-- I mean, I think it is just

95:18

sort of people that you've resonated with somewhat...

95:22

on an emotional and intellectual level.

95:25

And, yeah, I mean, yeah.

95:29

You know? And I guess a friend is someone

95:32

who's gonna support you in difficult times, I suppose.

95:36

A friend in need is a friend indeed.

95:38

Like, if someone's still supporting you

95:41

when the chips are down, that's a friend, you know.

95:44

If somebody's not supporting you, or if somebody's only--

95:50

Like, fair-weather friends are useless, you know, they're not real friends.

95:57

Like everyone likes you when the chips are up,

95:59

but who likes you when the chips are down?

96:01

That's a friend.

96:02

With someone who has as many chips as you, would it matter?

96:05

I mean, it's relative, you know.

96:08

-With that particular thing-- -It's not just a chips thing.

96:10

It's just like a--

96:13

Yeah, I mean--

96:17

There's this, sort of...

96:20

Popularity waxes and wanes, you know?

96:23

This is interesting. Does it wax and wane...

96:29

only by virtue of the number of chips,

96:32

or also by virtue of proximity to power and which one is bigger of the two?

96:47

I don't know, like, what is power, you know?

96:49

Like, power to do what?

96:51

I would think in the traditional sense, elected power.

96:55

-Position. -You mean how many gigawatts or whatever?

97:01

More like how many volts.

97:02

Yeah, like-- It's a voltage and amperage, you know.

97:06

Don't touch the wires.

97:11

Don't put a fork in the power outlet.

97:15

You'll get a real feeling for power if you do that.

97:23

Fair.

97:30

Yeah, it's gonna be very visceral, yeah.

97:34

[mimics electricity zapping]

97:44

I know you like Nietzsche and Schopenhauer and--

97:46

Well, I've read their books, yeah, sure--

97:48

You spoke about how your childhood was...

97:53

Yeah, I was just trying to find answers to the meaning of life,

97:55

when I had, like, an existential crisis,

97:56

when I was, I don't know, like, 12 or 13 or something.

98:00

-They speak about the will to power. -Sure.

98:06

Nietzsche said a lot of controversial things, you know.

98:08

He was, sort of...

98:10

I think he was, I mean, a bit of a troll, if you ask me.

98:16

Troll, how?

98:17

I mean, he'd just say controversial things to get a rise out of people.

98:22

He lived a miserable life and died early.

98:24

-Did he? -Yeah.

98:25

Well, who says he lived a miserable life?

98:27

-His sister, I think. -Okay, well, maybe she didn't like him.

98:33

No, I think he got sick and he died, he got a disease.

98:35

-I mean, allegedly syphilis or something. -Yeah.

98:41

But there's only one way to get that, you know.

98:46

So he might have had some fun along the way.

98:53

I did want to ask you this.

98:57

Milton Friedman speaks about the pencil.

98:59

What? Why?

99:05

Why does he go on about pencils?

99:08

I have to say that after Nietzsche and syphilis.

99:12

Why does Milton Friedman keep talking about pencils?

99:16

There he goes again with the pencils.

99:21

He won't stop.

99:23

I swear to God, if I hear "Milton talks about pencil" one more time,

99:28

I'm gonna lose my mind.

99:32

He's just rabbiting on about pencils all day.

99:41

Didn't even mention crayons.

99:46

What I find interesting about his pencil argument--

99:52

-Yeah? -He's--

99:53

Yeah, yeah, no, it's very difficult to make a pencil, you know.

99:56

In one place.

99:57

Think of all the things you have to do to make a pencil.

99:59

Yeah, like the lead comes from a country,

100:01

the wood comes from another country, the rubber from another.

100:04

You've always been against tariffs, but...

100:08

Yeah, I mean, I think,

100:10

generally free trade is better, is more efficient, you know.

100:14

Tariffs tend to create distortions in, you know, markets.

100:21

And generally, like, you think about any given thing.

100:25

Say like, would you want tariffs between you

100:27

and everyone else at an individual level?

100:30

That would make life very difficult.

100:31

Would you want tariffs between each city?

100:34

No, that would be very annoying.

100:36

Would you want tariffs between each state within the United States?

100:39

Like, no, that would be disastrous for the economy.

100:43

So then why do you want tariffs between countries?

100:46

-I agree. -Yeah.

100:51

How do you think this plays out? What happens next?

100:54

-What, with tariffs? Or what? -Yeah.

100:57

I mean, the President has made it clear he loves tariffs.

101:02

You know, I've tried to dissuade him from this point of view, but unsuccessfully.

101:06

Yeah.

101:07

-Fair. -Yeah.

101:10

The relationship between business and politics.

101:15

I was having this conversation with someone and we were thinking,

101:18

which is the last--

101:20

How many large, really big, profitable businesses

101:25

have been built in the last few decades without access to politics?

101:29

-And... -Um, okay.

101:33

Like, I don't know. Probably a lot, I don't know.

101:37

-Not everything needs politics. -Yeah.

101:38

I think, once you get to a certain scale, politics finds you.

101:42

Yeah.

101:45

It's quite unpleasant.

101:47

I was reading--

101:48

I was reading this book about Michelangelo, and he's--

101:52

The Teenage Mutant Ninja Turtles?

101:56

I used to watch that when I was a kid.

101:57

-And I still love it. -It's quite compelling.

101:59

Yeah, yeah, I used to love it.

102:01

Michelangelo, Leonardo, Raphael, and who's the fourth one?

102:05

-Donatello. -Yes.

102:06

-Yeah. -No, but about the sculptor, the artist.

102:12

And when he was sculpting David,

102:15

a politician comes up to him and says, the nose is too big.

102:19

So you know what Michelangelo does?

102:21

Total power?

102:25

So Michelangelo pretended to work from his scaffolding

102:29

and threw some dust down, but didn't change anything.

102:32

And he said, "Okay, done." And the politician walked away happy.

102:36

Is that how you deal with politics, sometimes?

102:42

You know, I've generally found that when I get involved in politics,

102:45

it ends up badly.

102:53

So then I'm like, you know, "Probably shouldn't do that."

102:58

"I should do less of that", is my conclusion.

103:00

Do you think that's true for all businessmen?

103:02

Yeah, probably, yeah, yeah.

103:07

Yeah, I mean,

103:09

politics is a blood sport, you know?

103:11

It's like you enter politics, they're gonna go for the jugular.

103:15

So best to avoid politics where possible.

103:21

What did DOGE teach you, if you learnt one thing?

103:24

Well, it was like a very interesting side quest, you know,

103:27

'cause I just got to see like a lot of inner workings of the government.

103:33

And, you know, there's been quite a few efficiencies.

103:39

I mean, some of them are very basic efficiencies,

103:41

like just adding in requirements for federal payments,

103:45

that any given payment must have an assigned congressional payment code

103:51

and a comment field with something in it that's more than nothing.

103:55

Like, that trivial seeming change, my guess is,

104:00

probably saves $100 billion or even $200 billion a year.

104:05

Because there were massive numbers of payments

104:09

that were going out with no congressional payment code

104:12

and with nothing in the comment field,

104:13

which makes auditing the payments impossible.

104:16

So if you have to say like, why can the Defense Department--

104:19

Or now the Department of War-- Why can it not pass an audit?

104:22

It's because the information is not there.

104:23

It doesn't have--

104:25

The information necessary to pass an audit does not exist, is the issue.

104:29

So a bunch of things that DOGE did were just very common sense,

104:35

things that would be normal for any organisation

104:38

that cared about financial responsibility.

104:41

That's most of what was done.

104:46

You know, and it's still going on, by the way.

104:49

DOGE is still happening.

104:51

But it turns out, when you stop fraudulent and wasteful payments,

104:56

the fraudsters don't confess to this.

105:01

They actually start yelling all sorts of nonsense that,

105:04

"You're stopping essential payments to needy people."

105:09

But actually, you're not.

105:12

We get this thing like saying,

105:14

"Oh, you've got to send this thing for whatever."

105:17

It'd be like, "This is going to children in Africa."

105:20

And I'm like, "Yeah, but then why are the wiring instructions

105:22

for Deloitte & Touche in Washington, D.C.? Because that's not Africa."

105:30

"So can you please connect us with the recipients of this money in Africa?"

105:36

And then we get silence.

105:38

Like, okay, we just want to literally talk to the recipients, that's it.

105:45

Then we're like, "Oh, no, it turns out, for some reason, we can't talk to them."

105:48

Like, "Well, we're not going to send the money

105:50

unless we can talk to the recipients and confirm they will actually get it."

105:55

You know...

105:56

But, you know, that sort of...

106:00

Fraudsters necessarily will come up with a very...

106:05

you know, sympathetic argument.

106:08

They're not going to say, "Give us the money for fraud."

106:10

That's not going to be what they say, obviously.

106:12

They're going to try to make

106:14

these sympathetic sounding arguments that are false.

106:16

-They're going to start an NGO and then-- -Yeah, they're going to say NGO--

106:19

It's going to be like "Save the Baby Pandas" NGO,

106:23

which is like, who doesn't want to save the baby pandas? They're adorable.

106:27

But then, it turns out no pandas are being saved, okay, in this thing.

106:32

It's just going to a bunch of-- It's just corruption, essentially.

106:37

And you're like, "Well, can you send us a picture of the panda?"

106:39

They're like, "No." Okay.

106:42

How do we know it's going to the pandas then?

106:45

That's what I'm saying.

106:46

What do you think of philanthropy?

106:49

Yeah, I think we should...

106:51

Well, I mean, I agree with love of humanity.

106:54

And I think we should try to do things that help our fellow human beings.

107:01

But it's very hard.

107:02

Like, if you care about the reality of goodness

107:05

rather than simply the perception of it,

107:08

it's very difficult to give away money well.

107:12

So I have a large foundation, but I don't put my name on it.

107:14

And I don't, you know... In fact, I say, "I don't want my name on anything."

107:20

But the biggest challenge I find with my foundation

107:22

is try to give money away in a way that is truly beneficial to people.

107:27

It's very easy to give money away to get the appearance of goodness.

107:30

It is very difficult to give money away for the reality of goodness.

107:34

Very difficult.

107:39

For a long time, the US had a lot of immigration,

107:42

like really smart people coming into the country.

107:45

-Yes. -We, back home in India,

107:47

called it the "brain drain."

107:49

All our Indian-origin CEOs in Western companies.

107:55

Yes, I think America has benefitted immensely

107:57

from talented Indians that have come to America.

108:00

That seems to be changing now, though.

108:04

Yeah, I mean.

108:06

Yeah, America has been an immense beneficiary of talent from India.

108:09

Yeah. Why has that narrative changed of late?

108:14

And America seems to have become anti-immigration to a certain extent.

108:18

Like, I was passing immigration,

108:20

and I was worried if they'd stopped me a couple of days ago.

108:24

Well, I think there's different schools of thought.

108:27

It's not like unanimous, but, you know, under the Biden administration,

108:32

it was basically a total free-for-all with, like, no border controls,

108:35

which-- unless you've got border controls, you're not a country.

108:40

So you had massive amounts of illegal immigration under Biden.

108:45

And it actually also had like somewhat of a negative selection effect.

108:51

So, if there's a massive financial incentive

108:56

to come to the US illegally and get all these government benefits,

109:02

then you're gonna necessarily create

109:05

a diffusion gradient for people to come to the US.

109:07

It's an incentive structure.

109:10

And so, I think, that obviously made no sense.

109:15

Like, you gotta have border controls. That's kind of ridiculous not to.

109:20

Then that's...

109:21

So, the left wants to basically have open borders, no holds barred.

109:26

You know, it doesn't matter if someone-- what their situation is,

109:29

they could be a criminal, it doesn't matter.

109:31

Then on the right, you've got, you know,

109:36

at least a perception that somehow their jobs

109:40

are being taken by talented people from other countries.

109:45

I don't know how real that is.

109:49

My direct observation is that there's always a scarcity of talented people.

109:54

So, from my standpoint, I'm like,

109:57

"We have a lot of difficulty finding enough talented people

110:00

to get these difficult tasks done.

110:02

And so more talented people would be good."

110:07

But I guess, some companies out there,

110:09

it's sort of, they're making it more of a cost thing,

110:13

where it's like, okay, if they can employ someone

110:15

for a fraction of the cost of an American citizen,

110:20

then, I guess, these other companies would hire people just to save costs.

110:25

But, at my companies, the issue is we just are trying

110:28

to get the most talented people in the world.

110:31

And we pay way above average.

110:34

So I can't say--

110:37

So, that's not my experience.

110:38

But that's what a lot of people do complain about.

110:42

And I think there's been some misuse of the H-1B Program.

110:50

It would be accurate to say that

110:51

there's, like, some of the outsourcing companies

110:54

have kind of gamed the system on the H-1B front.

110:59

And we need to stop the gaming of the system, you know?

111:04

But I'm certainly not in the school of thought

111:06

that we should shut down the H-1B Program.

111:09

That's-- Which some on the right are.

111:12

I think they don't realise that that would actually be very bad.

111:17

If you could speak to the people of my country, India,

111:20

the young entrepreneurs who want to build...

111:23

-Right. -...and say a message to them,

111:25

what would you say?

111:32

I'm a big fan of anyone who wants to build.

111:35

So I think anyone who wants to...

111:40

you know, make more than they take, has my respect.

111:43

So that's the main thing you should aim for,

111:47

aim to make more than you take.

111:51

Be a, you know, a net contributor to society.

111:59

And it's kind of like the pursuit of happiness.

112:02

You know, if you want to create something valuable financially,

112:06

you don't pursue that.

112:08

It's best to actually pursue providing useful products and services.

112:14

If you do that, then money will come as a natural consequence of that,

112:18

as opposed to pursuing money directly.

112:21

Just like you can't, sort of, pursue happiness directly,

112:23

you pursue things that lead to happiness.

112:25

But there's not like direct happiness pursuit.

112:29

You do things like...

112:32

I guess, fulfilling work, or study, or friends, loved ones,

112:38

that, as a result, make you happy.

112:42

So, it sounds very obvious...

112:48

but, generally, if somebody's trying to make a company work,

112:51

they should expect to grind super hard,

112:53

accept that there's, like, some meaningful chance of failure,

112:59

but just be focused on having the output be worth more than the input.

113:07

That is, are you a value creator?

113:11

That's what really matters.

113:16

Making more than you take.

113:18

I think that's a good way to end this.

113:20

-Lauren is asking us to wrap up. -All right.

113:24

I'd also like to take the opportunity to thank my friend, Manoj, at IGF.

113:31

He does a great job of connecting, I think, Indians, like the group here,

113:37

with people like you, in order to...

113:41

among many things, I think, get to know each other and become friends,

113:44

because once we are friends, maybe we can start working together.

113:48

So, thank you, Manoj, for putting this whole thing together, and thank you, IGF.

113:51

[audience applauding]

113:53

And, thank you so much, Elon, for taking the time.

113:55

You're welcome.

114:00

-Did you have fun? Was it boring? -Yeah, it was an interesting conversation.

114:03

Sometimes, they take these answers out of context.

114:06

But... I think it was a good conversation.


The discussion covers a wide range of topics including Elon Musk's vision for X (formerly Twitter), the future of communication with AI, the importance of collective consciousness, the meaning of life, the nature of money, technological advancements in AI and robotics, space exploration, the future of work and society, the philosophical underpinnings of morality, the complexities of the stock market, the concept of simulation theory, the role of spirituality, the significance of energy as currency, the challenges of philanthropy, the debate around immigration, and the personal philosophies of Musk regarding entrepreneurship, family, and friendship. It also touches upon the evolution of content creation, the value of live events, and the potential impact of AI on various industries.
