
Can Downgrading Your Tech Upgrade Your Results? | Cal Newport


Transcript


0:00

If I had to use one word to describe

0:03

modern digital tools, it would be fast.

0:07

Now, whether we're talking about

0:08

workplace communication or swarms of AI

0:11

agents or even apps for ordering food,

0:13

the focus always seems to be on reducing

0:16

friction and increasing the options

0:18

available to get people through whatever

0:20

task they're thinking about as quickly

0:23

as possible. Now, the speed of course

0:25

has a negative side effect. It's

0:28

exhausting. It reduces so much of our

0:30

life to a frantic blur of swipes and

0:33

taps and clicks all in a sort of

0:35

never-ending battle to keep up with a

0:37

ceaseless barrage of incoming

0:39

information. But what can we do? Going

0:42

faster has to make us more productive,

0:44

right? Or does it? You see, there's a

0:48

growing subculture of individuals who

0:49

are embracing simpler technologies that

0:52

offer less features and more friction.

0:56

And they're not doing this as a

0:57

political statement. And they're not

0:58

doing this because they're nostalgic,

1:00

but instead because they think these

1:01

more minimal tools actually make them

1:03

better at their work and make their

1:05

life more livable. I call this movement

1:09

slow technology, and it's what I want to

1:11

talk about today. Now, to help us better

1:13

understand it, we're going to be joined

1:14

by Amy Timberlake, who's an acclaimed

1:16

best-selling author of children's and

1:19

middle-grade books. Timberlake has won

1:21

countless distinctions, including a

1:23

Newbery Honor and an Edgar Award. Her

1:25

titles have been named to, as far as I

1:27

can tell, just about every best book of

1:29

the year list in existence. So, she

1:31

knows what she's talking about. Now,

1:32

Timberlake came to my attention

1:33

recently. Here's why I want to talk to

1:34

her about slow productivity. She came to

1:36

my attention when I discovered she had

1:39

recently shifted to using a mechanical

1:42

vintage typewriter

1:45

for more and more of her writing and

1:46

revision process. So, I really wanted to

1:49

find out what was going on here. So in

1:51

our conversation, we talk about what

1:54

it takes to

1:55

succeed in that world of children's book

1:57

writing, the details of her creative

1:59

process and how it's evolved, and how she

2:01

came to believe that using a mechanical

2:03

typewriter was actually going to make

2:04

her a better writer and a happier human.

2:08

Spoiler alert: I kind of end this

2:10

conversation about half serious about

2:12

buying a typewriter myself. So beware

2:13

about that. Um, and then after our

2:15

conversation, I'm going to step back and

2:17

I'm going to isolate some general

2:19

principles of slow technology. I'm going

2:21

to show you some other examples of slow

2:22

technology that have become popular in

2:24

recent years and make the argument that

2:26

many more people, not just those who are

2:28

in creative fields like writing, should

2:29

consider embracing simpler tools. So, if

2:32

you are tired of being told by, you

2:34

know, tech leaders like Sam Altman that

2:37

your future has to be orchestrating

2:39

armies of hyperactive AI agents, or if

2:41

you find your smartphone to be an

2:43

overstimulating anxiety machine, then

2:45

this episode is for you. As always, I'm

2:49

Cal Newport and this is Deep Questions,

2:53

the show for people seeking depth in a

2:55

distracted world. Today's question,

2:58

Should I embrace slow technology? And

3:01

we'll explore some answers right after

3:03

the music.

3:11

All right, Amy, uh, thanks for joining

3:14

us today. We have so much geeking out on

3:17

creative process and technology and the

3:20

deep work to get into today. So, um, I

3:23

have been excited about this. Uh, I want

3:26

to start by just making sure that the

3:27

listeners understand your story and

3:30

where you're coming from. Um, first of

3:32

all, how do you describe even the genre

3:34

of books you write in? I I see them

3:36

described as children's book, but

3:37

there's there's a range here, right?

3:39

From, you know, books that I might think

3:41

of as like middle grades to illustrated

3:43

books. So, just as let's just set the

3:44

stage. How would you describe yourself

3:46

as a writer?

3:48

>> Uh, well, I would describe myself as a

3:50

writer who will write pretty much

3:53

anything. However, I have uh been

3:57

published in writing for kids and I've

4:01

written picture books.

4:04

Actually, I published one picture book

4:06

and then I've written middle-grade

4:07

novels, which is kind of the middle

4:10

range um of readers. And then YA would

4:13

be older. So, I write

4:17

pretty much middle-grade novel age. And I

4:21

would say

4:22

that's

4:24

actually at this point it's probably

4:27

second grade through

4:30

eighth grade, seventh grade. Um, and this

4:33

last and and this last group of novels

4:36

that I wrote, they are actually

4:38

readalouds. And the idea was that I was

4:41

trying to write something that somebody

4:44

could read in a room and everyone from

4:47

the 80-year-olds to the five-year-old

4:50

would enjoy. And this was a big

4:52

challenge. I always take a challenge

4:53

every single time I take a project. And

4:56

uh yeah, and also I was trying to write

4:59

humor for the first time. I'm a quirky

5:01

writer, but I wanted to just try to be

5:05

funny in books. And

5:08

that was a big that was a big challenge

5:10

and and fun and I'm glad I tried it. I'm

5:13

really glad I tried it. So that's these

5:15

latest ones. The latest ones are the

5:17

Skunk and Badger books. So Skunk and Badger,

5:19

Egg Marks the Spot, and Rock, Paper,

5:21

Scissors.

5:22

>> I mean so something I've always wondered

5:24

about that genre is that it's a genre

5:27

where people often have this very

5:29

naive view of it. Right? When people

5:30

think about books they often think oh

5:32

the hard part about writing is like the

5:34

quantity of words. So like okay I get

5:35

it. I'm not going to be a big novelist

5:38

because I can't imagine writing that

5:39

many words. But then they think of a if

5:41

they're looking at like a picture book

5:42

or something, they're like, well, the

5:43

actual quantity of words there is not

5:44

that much. I could sit down and write

5:46

that many words and it would sound

5:48

roughly like a story. I could do that,

5:49

you know, like this afternoon.

5:51

>> It's a very competitive market, though.

5:54

>> What did you discover that you then

5:57

put into that uh that first book that

5:59

like makes projects sell in children's

6:02

books because there's so many

6:04

submissions in that world? Because so

6:05

many people try. What is it that people

6:08

don't understand about what a successful

6:10

book in that genre must do?

6:16

>> I think it just

6:18

well I can only speak for my experience

6:21

but I would say

6:25

the better written it is, the more

6:28

chance you have. And that first book,

6:32

I wrote it like my grandfather telling

6:36

the story. So that voice was very

6:39

specific. New Mexico

6:42

tall tale. And I just went for it. I

6:46

didn't do a lot of

6:48

things that

6:51

they might that you might read about and

6:54

hear that you have to do. I didn't um I

6:58

didn't worry about language. I didn't

7:00

worry about vocabulary. I worried about

7:02

how well can I tell this particular

7:05

story

7:06

um with this particular voice and I just

7:11

gave it everything I had. Um

7:16

so that's how I do it. Um, it

7:20

means for me, though, that as a

7:23

writer I'm not quick at what I do. It

7:28

means I produce a lot of words and then

7:30

I cut down. In particular for these

7:32

skunk and badger books,

7:36

they are really tightly written. They're

7:39

almost like a farce

7:41

and they can't carry a lot of extraneous

7:46

words. So I have to really know the

7:49

characters. I have to write pages and

7:52

pages and pages. And then I cut, cut, cut,

7:55

cut, cut, cut, cut,

7:56

cut. And the cutting just goes on

7:58

forever.

7:59

>> Yeah.

8:00

>> Um, so for me it

8:04

is really all about language and how

8:07

the words sound. And so, you know, I'm

8:09

often like when I write these I write

8:12

long then I stand up and I read them out

8:17

loud so that I can hear how the language

8:19

is hitting on the page, and then I cut

8:23

them again. So, yeah. So I can't really

8:27

honestly tell you what makes it

8:30

successful. I do think that the thing

8:32

that I really do is I put 100% of

8:36

my effort in it.

8:37

>> Yeah.

8:38

>> So

8:39

>> I think you did just tell us. I mean

8:40

what I'm hearing there is something

8:42

I've heard before which is in that type

8:45

of writing it's like a puzzle

8:47

coming together. Every word matters,

8:50

the tonality, the way that the

8:52

rhythm and poetry of how it sounds

8:54

out loud. Nothing wasted,

8:57

>> everything moves it forward, which is

8:59

different than, I don't know, if I'm

9:00

Neal Stephenson writing a thousand-page,

9:03

whatever. Not every sentence needs to be

9:06

>> perfect, right? I'm like all over the

9:07

place, postmodern digression going on,

9:09

and it's fine. It's like I'm moving like

9:11

a plot. Yeah. But

9:12

>> yeah,

9:13

>> uh I've heard the same thing about

9:14

screenplays that people as with

9:16

children's books, people like, "Oh, I

9:17

could write a screenplay because I know

9:19

movies and if I look at a scene, I was

9:21

like, they're just talking. I can just

9:22

talk." And then if you talk to

9:23

professional screenwriters, like I've

9:24

interviewed them for the show, they're

9:26

like, "Oh, no, no, there's it's like a

9:28

jewel-box puzzle. Nothing can be wasted.

9:30

You can't have a single person say a

9:33

single thing that doesn't have a reason

9:34

why they're saying it. And it better

9:36

not, you know, it's got to be Chekhov's

9:37

gun. If you mention this here,

9:39

this better come back here." And

9:40

actually, it's the nothing's wasted. So,

9:43

I I get that the sort of

9:45

>> um every sentence has to be right. But

9:47

that's interesting. You write a lot

9:49

>> so that you can then pull out as opposed

9:52

to

9:53

>> building the smaller number of sentences

9:55

very carefully just from scratch. You

9:57

you so you pull back to it. And is that

10:00

because it

10:01

>> Having the larger

10:03

amount of text really puts you into the

10:05

moment in the character so that you can

10:07

then better find

10:09

the right line for that page. Is

10:11

that a way to think about it? Hey, let's

10:12

take a quick break to hear from some of

10:15

our sponsors.

10:16

Hey men, you need to start taking better

10:19

care of your skin. What's the best way

10:22

to do this? With Caldera Lab. Caldera Lab

10:25

makes high performance skin care

10:27

designed specifically for men's skin.

10:28

See, men's skin is 25% thicker, oilier,

10:31

and ages differently from women's, which

10:34

means men need clean formulas engineered

10:37

for their biology. Now, Caldera Lab's

10:40

three-step regimen is powered by clean,

10:42

clinically tested ingredients and

10:44

breakthrough patent pending technologies

10:46

that deliver visible results. Step one

10:48

is to use their cleanser, which will

10:50

clear dirt, oil, and sweat that is built

10:52

up on your skin. Then, step two, you use

10:54

their serum, which is clinically proven

10:57

to reduce wrinkles, firm skin, and

10:58

improve elasticity. It also can deal

11:00

with puffiness under your eyes. Trust

11:02

me, man, as you get to a certain age,

11:03

that becomes relevant. And step three is

11:05

use their moisturizer, which

11:07

is lightweight and non-greasy. Now I

11:10

really hadn't done much with my skin

11:11

until I got these three products from

11:13

Caldera Lab and I didn't realize how much

11:16

I had been missing. So this is a small

11:19

habit that produces big results. Go to

11:22

calderalab.com/deep

11:25

and use the code deep to get 20% off

11:28

your first order. This episode is also

11:32

sponsored by BetterHelp. So, it's tax

11:35

season, which means finances are on our

11:38

minds. And this matters for our mental

11:39

health because financial stress can take

11:42

a serious toll on your mental health.

11:46

Money worries can bring anxiety and

11:48

sleep disruption and depression. It's

11:50

also probably one of the leading sources

11:52

of conflict among couples. Now, given

11:54

that 88% of Americans are feeling at

11:57

least some sort of financial stress,

11:59

according to an early 2026 survey, this

12:01

is a problem that we cannot ignore. One

12:03

part of the solution, of course, would

12:05

be getting better financial advice. But

12:06

another part might be getting the proper

12:08

support for your mental health. And this

12:11

is where BetterHelp enters the scene.

12:14

If you're considering therapy,

12:15

BetterHelp makes it easy. With over

12:18

30,000 therapists, BetterHelp is the

12:20

world's largest online therapy platform,

12:22

having served over 6 million people

12:24

globally. And it works with an average

12:26

rating of 4.9 out of five for a live

12:28

session based on over 1.7 million client

12:32

reviews.

12:33

So, when life feels overwhelming,

12:35

therapy can help. Sign up and get 10%

12:38

off at betterhelp.com/deepquests.

12:41

That's betterhelp.com/deepquests.

12:46

All right, let's get back to the show.

12:49

>> Yeah. So, I think the reason why

12:51

I go long is for a couple of reasons.

12:55

Sometimes it's because I need to

12:58

discover who the character is, and the

13:00

characters are always very specific, you

13:02

know? So,

13:04

I don't know. Have you have you seen

13:06

Sally Wainwright's stuff on the BBC like Riot

13:10

Women or, you know, any of that?

13:12

>> I'm familiar with it. I know the

13:14

titles but I'm not very familiar

13:16

with it.

>> She is just exactly

13:19

to me exactly the kind of writer that

13:22

you were just talking about which is

13:24

where every

13:26

every piece of dialogue that she has in

13:29

one of those um miniseries BBC

13:34

shows um is always just thought through

13:39

and so and I would imagine that she also

13:43

writes long. She has to figure out who

13:47

those characters are exactly and and

13:50

sometimes she's going back and placing

13:52

dialogue early because it makes sense

13:54

because this, you know, it's all just

13:57

figuring out where things land. Um, but

14:00

I do it I do it because it's a lot about

14:04

characters and finding those scenes. And

14:06

then also sometimes if you write long

14:11

and you really love language like I do,

14:15

you find that sentence that is just it's

14:18

perfect. It's dynamite. And you go,

14:22

okay, that's it. That's the voice.

14:24

That's what I need. Now I have to write

14:26

this whole thing again because I have to

14:28

have that kind of like whatever that is.

14:31

It's like a music. It's like a sound and

14:34

you get really excited about it.

14:36

>> Yeah.

14:36

>> Um so anyway, so that's why I do it.

14:39

>> How does that change when you're writing

14:41

the slightly older-grade books that are

14:43

chapter format, more text? Um how does

14:45

that adapt to that?

14:46

>> Oh, that's that's when I'm really doing

14:48

it.

14:49

>> Interesting. I mean, yeah. All of these

14:51

all of these skunk and badger books are

14:53

really like Okay. All right. Here's

14:56

Skunk and Badger. The first line of

15:00

this one is, "The first time Badger saw

15:04

Skunk, he thought puny and shut the

15:07

front door." Took me a while to find out

15:10

that he would use the word puny and that

15:14

that is the first time, you know, that

15:16

he he shut the front door, but that's

15:18

the beginning. And so I just

15:20

had to write, write until I found

15:22

that thing and I was like all right

15:24

that's how it is. That's that's what

15:26

this is.

15:26

>> Had you written like dialogue a lot of

15:28

dialogue and then sort of found in that

15:31

dialogue, oh this is the this voice

15:33

feels right for the character and then

15:34

you knew how to go back and write that

15:36

first word in that first line.

15:37

>> Yeah. Yeah. I had to find out and I had

15:39

to find out who Badger was. And uh

15:42

Badger is an important rock

15:46

scientist. He does important rock work

15:49

and he's always actually it's funny it's

15:51

funny that I'm talking to you about this

15:53

because Badger always kind of the

15:58

way he does his important rock work he

16:00

does his important rock work is he's

16:01

always walking somewhere and he's

16:03

thinking I must focus focus focus and so

16:06

he's always saying focus focus focus

16:07

>> I like this animal. All right, here we go.

16:09

I like this character

16:10

>> Yeah, and he's, you know, he's kind of

16:14

very happy in his life. He lives in

16:16

a brownstone. He does his important rock

16:19

work. He in the first book he's um you

16:22

know he's got his he's got his rocks.

16:24

He's got his magnifying glass. He's like

16:26

is this a rock or is this a mineral?

16:29

And then one day there's a skunk that

16:32

walks in, knocks on the

16:34

door, and now Badger has this roommate.

16:37

So, he's perfectly fine the way he is,

16:42

but he doesn't really have a choice

16:44

about Skunk being his roommate, and

16:46

Skunk is very different. And so, anyway,

16:50

finding those two characters

16:52

um took a while. And then the story is

16:57

actually told in um

17:00

a limited omniscient third person, third

17:04

animal voice because these are all

17:06

animals in sweaters. Thank you to Jon

17:09

Klassen and his beautiful sweaters. So

17:13

um and they are all told through badger.

17:18

Skunk is a bit of a mystery, but he

17:21

isn't to me, but he is to the reader.

17:25

So, yeah. So, I had to discover what is

17:29

the voice of this particular story. It's

17:32

going to be Badger

17:36

and he's he does this important rock

17:38

work and

17:40

um he he has a very serious life. It's a

17:43

very good life. It is, you know, it is

17:46

fine. He's not hurting anyone as he is.

17:48

He could live that way for a very long

17:50

time and I would be happy for him. But I

17:53

think he's a little better with skunk in

17:55

his life, but that's just me as the

17:57

writer.

17:57

>> So even to find out that perspective,

18:00

that's that was that came out of the

18:01

writing as well. Are we going to do

18:02

limited third person? That's like one of

18:04

the things you discover. Yeah.

18:06

>> Yeah. instead of first person. Sometimes

18:09

just having all these choices I I mean

18:12

when you're just you're starting with a

18:14

blank page, there's so much choice and

18:17

sometimes it's just too much choice. Oh

18:20

my gosh. So it just takes a while to

18:23

find all that. And so the writing is

18:25

just that's the way I have to discover

18:28

that. And then once I know kind of how

18:30

it's going to go, then my process is

18:35

it's the first scene that kind of sets

18:39

how far I can write into the story. How

18:42

deep have I gone? How much do I know

18:45

about this world? How much do I know

18:47

about these characters? And then I just

18:49

write until the story kind of wanes and

18:52

it disappears. And then I come back. And

18:56

then I come back and I go, "Oh, I don't

18:58

know enough yet." And then I'll work

19:00

some more to kind of build up the

19:02

beginning so I have more fuel to get to

19:04

the end of the story. That's how I sort

19:05

of think about it.

19:07

>> Now, this is fascinating. And now I want

19:08

to unpack some of the actual uh even

19:10

physical rituals around this, but I

19:12

guess we should clarify.

19:14

>> Did you at some point along this way?

19:15

Are you do you write essentially

19:17

full-time right now or

19:18

>> I do now.

19:19

>> How did that transition happen? And and

19:21

psychologically, what was that like?

19:25

Uh, well,

19:28

okay.

19:30

I got married. There was health

19:32

insurance.

19:34

Yay. And my husband um my husband is a

19:41

well actually he just retired so he was

19:44

a professor in a theater department.

19:49

So we moved out to DeKalb, Illinois, and I

19:54

we moved from Richmond where I had a

19:57

job. I had a regular job and you know I

19:59

was making money and I had anyway but we

20:02

moved to DeKalb, Illinois, and suddenly we

20:06

got out there and Phil had a job as a

20:08

professor and I did not know what I was

20:11

going to do for work. I mean I I was

20:15

looking around. I was like, well, I

20:17

could work in the university maybe

20:19

somewhere.

20:20

I don't I have a lot of administrative

20:23

skills and I write. Um,

20:27

but I anyway, I was looking at

20:29

detasseling corn jobs. I was like looking

20:33

at like I was like, "Huh, what am I

20:35

going to do?" And and Phil just said,

20:37

"Why don't you just let let's see how

20:40

this goes, you know? I'll take

20:44

I'll

20:46

get you know I have health insurance.

20:48

Let's just see how it is if you write.

20:50

And so that is when I started writing

20:55

all by you know and it was a big

20:58

transition. It was a dream come true. It

21:01

was a dream come true and it was a big

21:03

transition. Um

21:07

yeah. What was it like? Well, it was

21:09

weird being in charge of my time. I was

21:13

completely in charge of my time. Um,

21:17

and I had a little tiny office and I

21:20

would go in there and I would sit and I

21:22

would... What was I working on at the

21:25

time? Oh, you know, it was some sort of

21:29

laptop. Um,

21:33

anyway, yeah, it was um it was a big

21:36

transition. It was weird and it was

21:39

great. So, it was all of those things.

21:42

>> How did it impact the writing? I mean,

21:44

was it were you writing faster or more

21:48

or better or was it just less clutter in

21:50

your mind? What was the delta between

21:53

Richmond and the DeKalb writing style? Hm.

21:57

Well, in Richmond,

21:59

um,

22:03

I was often I would often come home from

22:05

my job and be very tired.

22:08

So, and I was doing writing. I was doing

22:10

um I was working at the Virginia

22:13

Commission for Fine Arts

22:16

>> there. And so, I was doing their

22:18

website. It was a pretty basic website.

22:20

And I was doing some writing. And so I

22:22

would write during the day, then I would

22:24

come home and I would be tired. And so

22:27

it was really hard for me to even feel

22:29

creative. Like

22:31

>> I would just be sort of drained by the

22:33

time I got home. Um

22:36

so

22:38

I wasn't getting as much done. I you

22:40

know I was also trying to break into

22:43

book reviewing at that time. So I

22:48

I would write these columns anyway. So

22:52

yeah, so it wasn't a lot. I wasn't

22:54

getting a lot done. Um, I did have

22:58

an agent interested in my writing at

23:01

that point,

23:03

so I knew I had

23:05

to get stuff done. Um

23:09

and then I was just trying and it was

23:12

hard and I was tired.

23:14

>> Yeah. And then when I got to DeKalb, I

23:17

had more time. And I think the hardest

23:20

part was discovering that my ideas

23:24

were complicated. And it was going to

23:27

take a lot of time for me to even get

23:30

those complicated ideas into

23:34

a novel, especially a novel for kids. I

23:38

was, you know,

23:41

I think my first novel, um, it ended up

23:44

being called That Girl, Lucy Moon, but

23:47

it was about a 12-year-old activist. And

23:49

I really wanted like three generations

23:52

of women in this book. Like, I wanted

23:55

the 12-year-old activist. I wanted this

23:57

woman who owns the business in town. I

24:00

wanted her in it, and I wanted the

24:02

mother. And instead, I had to really,

24:05

you know, I had to learn the business.

24:07

And so not only was I writing way

24:09

too much, I was writing huge

24:11

complicated ideas and it was

24:16

it was kind of frustrating because I was

24:18

here I was with this I suddenly had the

24:21

time to really write

24:24

and

24:26

it turned out I didn't really have the

24:29

skill to know what the structure was

24:31

yet. So it was just

24:34

it was a little frustrating. And you

24:36

know, there's my husband. He's going off

24:37

to work. He's doing his thing. And how's

24:40

your day, Amy? And I'm like, "Oh, I

24:44

don't even know if I'm a writer."

24:47

>> Interesting. So, in retrospect, really,

24:49

the beginning of your time as a

24:52

full-time writer was actually more of a

24:54

self-guided training and education

24:57

process. They're like, "Okay, now I have

25:00

time

25:01

>> to actually

25:03

>> push my skills in this genre to the next

25:05

level." And you were in that frustrated

25:07

learning process. You thought like right

25:09

off the bat I'm going to be spitting out

25:10

chapters and that was not what happened.

25:12

>> I really thought it was going to be

25:14

easier than it was. I thought my idea

25:16

was so great. I was like, "It's just

25:18

going to go." And instead,

25:21

um, I did end up I did end up selling

25:24

it. And in in kids lit you can often get

25:27

an editor that will help you you know

25:30

like they they will really you know and

25:32

so you know she says to me we can't have

25:36

three women in this book and this is for

25:40

kids right you know do we really want

25:43

like a mom and you know and I was like

25:45

oh

25:47

>> okay

25:48

>> you're right

25:50

>> Yeah, so, but, you know, I

25:54

saw, you know, I saw the point of what

25:57

she was saying and I was like, "All

25:58

right, well, you know, and those

26:01

editors, those New York editors, they

26:04

know their stuff."

26:05

>> Yes.

26:06

>> They are good readers. And so a lot of

26:08

times if you have a good editor and

26:10

they're like, they're saying, you know,

26:13

I'm just not, you're like, I know, I

26:16

know, I know.

26:19

>> Yeah.

26:21

>> Yeah. You're right. I've always heard

26:23

that's the difference between

26:24

a new writer and an experienced writer

26:26

is the new writer fights a lot for the

26:28

things the editor is noting and the

26:30

experienced writer is like yeah I'm sure

26:32

you're right, and you get

26:35

rolling with it.

26:36

>> Um so how does that differ from today?

26:38

So now let's talk I'm I'm curious and

26:40

like how you would approach a novel you

26:42

know today now you have all this

26:43

experience under your belt like for

26:45

example are you still from the excavator

26:48

Stephen King camp or are you from the

26:50

outlining camp? uh do you have a

26:52

completely different process for

26:54

beginning to conceptualize the idea

26:55

before you start writing? What's your

26:57

now like sort of expert process you

26:58

would deploy today?

27:02

>> Well, I mean I think the first thing is

27:04

I have to choose what idea I'm going to

27:06

work on. I usually have a lot of ideas.

27:09

So that's not really a problem. But then

27:11

I have to choose which one I'm going to

27:14

work on. I think that is kind of

27:16

different than earlier

27:19

in my process was that I would,

27:25

you know, I would

27:28

kind of just power through something and

27:31

now I'm kind of choosing. I'm like okay

27:34

I think this is the one that I want to

27:35

work on. Um, so I choose it and then and

27:40

then I start I just start writing

27:43

wherever

27:46

it's catching me. You know, wherever the

27:49

story is, like the thing that's really

27:51

alive in my head, that's what I'll start

27:53

writing with and I'll see where that

27:55

goes. And then at that point once I

27:58

start producing pages, and it

28:02

might not be that good. Well, I can pretty much

28:04

guarantee it will not be that good, but I

28:07

you know I will be producing pages and

28:10

then at that point I'll like look at it

28:12

and I'll say okay what's the structure

28:15

you know what am I thinking about in

28:17

terms of structure and I will start

28:20

using tools like outlining at that

28:22

point. So, I use pretty much any tool

28:25

you have heard about, I have used.

28:28

>> I have stuck Post-its on doors. I mean,

28:32

that was a big thing for a while. Um, I,

28:35

you know, I spread stuff all over the

28:37

floor back here and I crawl around and

28:40

grab things and or, you know, or I've

28:44

used Scrivener, which is a

28:47

writing tool that people like online.

28:49

Um, I've yeah, I've just done pretty

28:52

much everything. But basically, my rule

28:55

is when I'm working on something, if I

28:57

have an inkling somewhere in my head

28:59

that I think this might help me, I grab

29:03

it and I give it a try.

29:06

>> What's your schedule now? Writing

29:08

schedule. How what's ideal for you?

29:11

>> Um, mornings.

29:12

>> Yeah,

29:12

>> working mornings. If I can work

29:15

mornings, that's the best. Um, I

29:19

do need to

29:22

stretch. I do need to go for walks. I

29:25

have to get outside. I can't just I

29:28

can't just hole up and

29:31

crank, but I do usually try to, you

29:34

know, put in several hours at least. And

29:37

actually, your time block schedule has

29:40

really helped me. So, thank you. And I

29:42

have like I have resisted doing that for

29:45

so long, like the idea, because by the

29:47

time I'm done working the last thing I

29:49

want to do is look at my day ahead.

29:51

>> Yeah.

29:51

>> So anyway, so I've started doing that

29:54

and that's actually been really helpful.

29:55

I'm always

29:56

>> I'm always trying I'm always

29:58

experimenting on myself to see is this

30:02

helpful, is that helpful? And so I'm

30:04

always, I feel like I'm just a

30:06

constant experiment in my work process

30:09

like how I do it. But oh anyway, I'll

30:13

wait for your question. Well, where do

30:15

you write? What type of space gets your

30:17

creativity going?

30:19

>> Oh, I I like having an office.

30:22

>> Yeah. At your house. At your house or

30:25

somewhere else?

30:25

>> That's where I am right now. We're in my

30:28

office and uh this is one of the nicest

30:31

ones I've ever had. It's quite spacious.

30:34

Um, but I like having I like having an

30:38

office and I I um

30:42

I, you know, I just come up here and

30:46

yeah, it's it's just a space that's

30:49

mine. I It's a big deal,

30:52

I think, particularly for women. I know

30:55

Virginia Woolf writes about having a room

30:57

of one's own, but it really was a big

31:00

deal for me to just sort of claim a

31:03

space in our home and say, "This isn't

31:07

for anything else. This is just my

31:10

office." And I Yeah, I really like

31:13

having an office.

31:16

>> Yeah, we we underestimate the power of

31:18

space. Uh especially in like the world

31:21

of corporate work, too. I've long argued

31:23

about this is like, hey, we'll just

31:24

throw everyone in like some big open

31:26

space and this is efficient because we

31:27

don't have to have desks. And it's like,

31:28

no, human brains respond to environment.

31:32

So, um I I've heard the same thing from

31:34

lots of people. Having a space of your

31:36

own, customizing it, and also just

31:38

recognizing it. When I come here, it's

31:40

to work.

31:41

>> Yeah. All that can make a big

31:43

difference.

31:43

>> Um All right. So the the thing that

31:46

originally caught my attention, it's

31:48

what I want to get to now is this

31:50

experiment that you've been running more

31:51

recently where uh you've made the the

31:55

technical process of writing strictly

31:57

harder by moving back to an actual

32:00

typewriter as the mechanism with which

32:02

you're producing words. Tell me about

32:04

what you're doing and why.

32:07

Um, okay. Well, yeah.

32:12

So, the reason the typewriter came about

32:15

probably a little bit because Tom H Tom

32:17

Hanks was talking about it. I'm sure

32:19

you've heard Tom Hanks

32:21

>> had this thing. So, it was kind of in

32:23

the it was kind of in the air that

32:25

people were using typewriters again. Um,

32:29

but I was working on these three books,

32:32

Skunk and Badger, Egg Marks the Spot, Rock

32:34

Paper Incisors,

32:37

and

32:39

I had a um, and I was under a deadline

32:43

for it, and I was late on the third

32:46

book. So, I was really feeling I was

32:49

really feeling pressure and I had a work

32:53

process that was working for me. And I

32:57

was which was

33:00

I would

33:02

use the computer, the word processing

33:05

program, Microsoft Word mostly. I would

33:09

print out a chapter. I would print it

33:11

out in paper. I would bring it to my to

33:14

my chair and I would sit there and I

33:16

would make the changes in pencil or pen

33:19

on this draft. And then when I was done

33:23

with it, I would come back to the

33:25

computer and I would type it in and I

33:27

would try to make myself not fix it

33:30

while I was while I was typing. So I

33:33

would turn on music in the back just so

33:35

that my brain would be thinking about

33:37

the music and all I was doing was typing

33:39

the words into Microsoft Word. So, I was

33:41

doing this process and I also had a

33:45

notebook and a three-ring binder and I

33:50

was using 3x5 cards. So, I and what I

33:52

had realized as as I kept doing this, I

33:55

realized I kept moving away from the

33:58

laptop to get my work done, I realized,

34:01

you know, every time I do that, I focus

34:03

so much better.

34:05

So I had already started doing that but

34:10

the process was working and I am so

34:14

protective of my process like if I have

34:17

something that's working I am not going

34:20

to mess with it. If I have a habit

34:22

that's part of that or you know a

34:24

routine I will just keep that routine.

34:27

So, I was I was just keeping this

34:29

routine with the three, you know, the

34:31

printing out and the three ring the

34:34

three-ring binder and the 3x5 cards and

34:36

the notebooks and I'm using all this

34:38

stuff and I just kept thinking, I hate

34:42

this laptop. I hate it. I mean, I I

34:46

mean, I love my laptop. Let's face it.

34:48

The reason I have trouble with it is

34:50

because I also love it.

34:52

>> But, it's a really cool thing. But, you

34:55

know, every time an up an update would

34:58

come, something would change and then

35:00

there would be a notification and I'd be

35:01

like, I know I can turn this off. I

35:04

mean, I would spend hours working on

35:06

work focuses and home focuses and all

35:10

this stuff to try to get this laptop to

35:12

not interrupt me. And also, I love

35:15

email, so, you know, I would just check

35:17

email on a whim. So, I was like, oh my

35:20

gosh. So, I was really struggling with

35:22

that. And so I was spending more and

35:24

more time away from it. And I just

35:27

started thinking, you know what? It

35:29

would be I should really try a

35:32

typewriter.

35:34

And I was like, "Oh my gosh, that's

35:36

insane. That's insane. I can't try a

35:39

typewriter." But I was like, "Well, I I

35:41

think as soon as this project is done,

35:43

as soon as you send it off to your

35:45

editor, maybe you should just see if you

35:47

can get yourself a typewriter and try

35:49

it." Um, I

35:55

I had the only in my past my mom had an

36:00

electric Smith Corona

36:03

and uh it was in our living room. So the

36:07

only reason you really used that

36:10

typewriter was if you were,

36:14

you know, you'd written your paper and

36:16

you were typing it up and you better not

36:18

make one single spelling error on that

36:21

thing and meanwhile mom is right there

36:25

and sometimes she would like suggest

36:27

changes. So it was just like a stressful

36:30

thing.

36:30

>> Yeah.

36:31

Meanwhile, my dad downstairs was way

36:35

into computers.

36:37

So, he had like an Apple IIe. We had a

36:41

TRS-80, which was a Radio... I think

36:44

that was a Radio Shack TRS-80. We had a

36:48

TRS-80, an Apple IIe at one point. And so

36:51

I ended up being the first person in my

36:55

high school who had

37:00

turned in a paper on a computer. And it

37:03

was a dot matrix printer. And uh I got

37:06

an A minus because my dad he stayed up

37:10

really late trying to figure out how to

37:12

get footnotes to raise on this um Apple

37:16

2e on their word processing and he

37:18

couldn't get it fit to work. So anyway,

37:20

so I feel like I sort of learned to

37:23

write on a computer. So the idea of

37:26

trying a typewriter was really crazy to

37:29

me because I just thought, well, how do

37:30

you revise? I don't get how you revise.

37:34

So now I am working

37:38

on my next project, which at this point

37:41

is another animal and sweater story

37:43

because I found that they're pretty fun

37:45

to write.

37:46

>> Yep.

37:46

>> Which is really Anyway, I really like

37:48

it. Um, so I'm working on it and I am

37:52

doing

37:54

the drafts on the typewriter as long as

37:58

I can. I know at some point I'm going to

38:00

have to get back on the laptop and I am

38:02

really fast at making changes on the

38:04

laptop. So I don't know how long I can

38:08

resist exactly,

38:10

but it's been really interesting. I

38:12

mean, there's been a couple of things

38:14

that have been really great. Um,

38:18

first of all, when I started writing on

38:20

a typewriter, I'm looking at my

38:22

typewriter. It's right over here. I

38:23

don't know. Here, I'll show you. I'll

38:25

give you

38:26

>> Oh, there it is. Oh, that's so that's

38:28

not even electric.

38:30

>> No,

38:31

I love Actually, you can you can get the

38:34

electric still, but you know, like

38:35

sometimes the um I'm always worried

38:38

about the electrical from previous

38:41

decades.

38:42

>> Yep. Is it old? Is that vintage? or is

38:45

there still people who are now kind of

38:46

retromanufacturing this technology?

38:49

>> Um, I think there are a few typewriter

38:53

manufacturers, but um I wouldn't

38:55

recommend ever buying one. They're

38:58

pretty they're pretty bad as at least

39:01

all the typewriter people say they are.

39:03

I haven't. Um, so this one is this one

39:06

is from 1960s

39:08

and um there are a few typewriter shops

39:11

still around. They put um the thing that

39:15

you want to do is you want to get one

39:17

that has new rubber.

39:18

>> Okay?

39:18

>> And that's the hard part to get. So

39:21

that's why you'd want to go to a

39:22

typewriter shop. And then also because

39:24

mailing

39:26

if you mail a typewriter there's a lot

39:28

of moving parts and

39:32

all it takes is one FedEx guy to

39:36

clunk your clunk your typewriter really

39:39

heavy on some other box and suddenly you

39:42

have an issue. Yeah.

39:43

>> And it's it's hard to get them fixed. So

39:45

anyway Yeah. So this is um Anyway, so I

39:49

decided that I was going to try the

39:51

typewriter. I'm sorry I I got

39:53

distracted. What would you like to know?

39:56

>> Well, so do I have it right then? That

39:58

the process is you'll you'll type a

39:59

draft of like let's say a chapter on the

40:01

typewriter. Then you're marking it,

40:03

editing with pencil and paper like you

40:05

had done before and now you come back to

40:08

the typewriter blank sheet of paper and

40:11

now you're typing it. You type in the

40:13

the draft with your hand-marked

40:15

revisions and then you'll take that the

40:18

revise again. Is that like the right so

40:19

like the typewriters for

40:21

>> like how do I have that right or what am

40:23

I missing?

40:23

>> I think that's about that's about it.

40:25

It's that's the part that I thought

40:28

that's the part that I thought was

40:29

insane. The fact that you would type the

40:32

whole thing again. Oh my gosh. But the

40:37

thing that's interesting about it is

40:39

that it makes you think through the

40:42

whole thing again. And so now it's in

40:45

your your head twice.

40:48

And

40:49

it does actually help in a weird way. I

40:53

mean, it actually helps. It's more for

40:56

me. I I'm I've never been someone that's

40:59

really good at memorizing or anything

41:01

like that, but repeating something is

41:04

really helpful for me. So, this is this

41:08

does that. It repeats it. And then the

41:10

other thing it does is it does not

41:13

interrupt you in any way. All it does is

41:18

puts word, you know, puts your words on

41:20

the paper. And I couldn't believe, I

41:23

mean, I think the first time I used a

41:26

typewriter, I was just trying it and I

41:28

got a writing exercise out. I started

41:31

doing it

41:33

and two hours goes by and I haven't even

41:37

thought. I was like, "Oh my gosh, how

41:40

did two hours just fly by?" I mean on my

41:45

laptop I would often just

41:48

it did not feel like that. It it did not

41:51

feel like I was just dropping I guess

41:53

they call it dropping down into the

41:55

well. I just felt like I just dropped

41:57

there and it was easy to to do it.

42:01

>> And um what it really at and and that

42:04

has been that has been the way it's felt

42:07

every single time you know I use this

42:10

machine. I think, oh, it's it's easier

42:12

for me to focus on this thing. And in

42:16

fact, it's made me feel that I was doing

42:20

a lot of work on my laptop that I did

42:22

not know I was doing. Like I was doing a

42:25

lot of work to maintain my focus that I

42:29

didn't know I was doing

42:32

because it was so much easier to work on

42:34

this thing. Now, the only problem is is

42:36

my writing has not I I can't tell that

42:40

my writing has gotten better. So, I

42:42

really honestly wish that I was suddenly

42:45

a better writer because I was using this

42:48

and

42:49

>> and I can't really tell that.

42:52

>> Is it is it faster like in the if you

42:55

zoom all the way out because

42:57

>> yeah,

42:57

>> less of your energy therefore less of

42:59

your time is fighting distraction. Do

43:01

you think like if you measured your

43:02

books are being produced a little bit

43:03

faster?

43:06

>> Maybe. Maybe. I mean, one of the things

43:08

that is interesting is that I can see my

43:11

process better.

43:14

I as I've been doing the drafts on this

43:17

machine on the typewriter,

43:20

I understand better what I am doing to

43:24

create that chapter. And I think that's

43:26

just got to be helpful to know that

43:30

um

43:32

so I realized that one of the things

43:34

that I do when I write a chapter is and

43:38

I think I was doing this on the laptop

43:40

too is it's almost like I'm you know how

43:43

how painters say they like a watercolor

43:45

painter will add like different levels

43:49

to you know they'll start with pencil

43:50

and then they'll add this and then

43:53

they'll add this And I am really seeing

43:56

that that is what I do. Like I I start

43:59

with this rough thing and then I'm like,

44:02

"Oh, what I really need is I need this

44:05

thing." And then I put that thing in.

44:07

And then I realized, "Oh, wait. I've

44:08

forgotten that." And then I put it in.

44:10

And somehow on the laptop,

44:13

I think I was doing like three or four

44:16

things,

44:18

you know, at the same time. And I

44:20

couldn't really tell you what my process

44:22

was. But in this with using this, I can

44:25

really tell what my process is and I go,

44:27

"Oh, this this will be helpful

44:29

information for future drafts." It's

44:32

knowing, "Oh, I like to like slot stuff

44:35

in and just keep working and then, you

44:38

know, get that. It's it's like creating

44:40

a painting but with words."

44:42

>> That's fascinating. Which Yeah. So it

44:44

tells me as as my guess would be with as

44:46

future projects go on with this process,

44:49

your rate at which you feel personal

44:51

improvements in parts of process and

44:53

craft will probably increase because now

44:55

you're making the process better

44:57

>> and now you can start thinking, okay,

45:00

well, if this is actually what my

45:01

process is, how do I make this second

45:03

step of the process

45:04

>> better? If this is what I'm looking for

45:06

there, then why don't I really like lean

45:08

into that piece and do, you know, I

45:09

could imagine

45:10

>> over the next few books that this

45:12

manualness is going to unlock

45:14

>> it's going to unlock more leaps and

45:17

polishes in the craft.

45:19

>> That's what I think too. I mean, I think

45:21

I think knowing more information is

45:24

good. It's actually good to hear you say

45:26

that because I think I think um I think

45:30

you probably think more about process

45:34

than I do. But I you know I'm always

45:36

trying to get my process to be better

45:38

and so hearing you say that you think

45:40

it'll help me is an encouragement to me

45:43

I guess.

45:44

>> So final final question. So I want to I

45:46

will do a little reflection on

45:47

productivity you know productivity and

45:49

technology and the brain is that's a you

45:52

know confluence of topics. of course I'm

45:54

I'm really interested in we're in a

45:56

moment now where we're hearing a lot

45:57

about writing and productivity because

45:59

of generative AI. It produces text. So

46:01

we're thinking a lot about writing

46:03

productivity and the the story that's

46:05

being told by the AI companies is what

46:08

we need to be more productive is words

46:11

have to be produced faster and we need

46:13

to reduce the cognitive burden. It should be

46:15

easier cognitively. We want it to be

46:17

easier and be faster then we'll be more

46:19

productive. But when we hear from a

46:20

professional writer like you, a real

46:22

creative award-winning writer, the

46:25

productivity, which you probably think

46:26

about in the scale of like the books you

46:28

produce over the course of multiple

46:30

years and how good they are is

46:32

completely unrelated to the speed with

46:35

which you put words on the paper or

46:37

trying to reduce the cognitive strain.

46:39

You've actually gone the other

46:40

direction, made the words go on the

46:42

paper slower

46:44

>> to gain more big scale productivity. So,

46:48

so how do you think about in the

46:50

creative arts the idea of productivity

46:53

because I think it's very different than

46:55

the way that we're we've been hearing

46:56

recently?

46:59

>> Well,

47:00

number one, I always want to be more

47:03

productive. Like, oh my gosh, I would I

47:06

would love but I I don't

47:10

I just don't think the I don't think art

47:13

is efficient. That's my That's my number

47:17

one thing.

47:19

And

47:22

no, I mean, for me,

47:27

I just want to create the absolute best

47:32

book for kids that I can create in the

47:37

time that I am living in. So, I always

47:39

give myself a little out. Sometimes, you

47:41

know, you're living in a weird time

47:44

>> and you are just not going to be able to

47:46

do ex, you know, it's not going to be

47:48

perfect and nothing is going to be

47:49

perfect, but I always want to do my

47:52

absolute best. And for me, that means

48:00

it sounds good when you read it out

48:02

loud. It's um it has it it has language.

48:08

It has rhythm. It has um it has voice.

48:12

It's very specific to the characters.

48:15

And

48:18

I mean,

48:20

I don't know another way to do it except

48:22

for just taking time and doing it slow

48:25

and giving it

48:29

just giving it time. And I honestly I

48:32

always wish I were faster. I mean, you

48:34

could ask my husband any day. He will

48:36

say, "Amy, I've heard this before.

48:40

you wish you were faster. And I go, "Oh,

48:43

I wish I were faster." And then, but

48:45

there's no other way for me to create

48:47

what I feel good about

48:50

um except for taking the time.

48:54

>> I want to take another quick break to

48:55

hear from some of our sponsors.

48:58

Let's talk MyBodyTutor, a 100% online

49:02

coaching program that solves the biggest

49:03

problem in health and fitness, lack of

49:06

consistency. Now, here's how it works.

49:08

When you sign up, you're assigned a

49:10

coach. And the coach helps you figure

49:11

out both a nutrition and an exercise

49:13

program custom fit to your goals and the

49:16

realities of your particular life. And

49:18

then here's the key part. You check in

49:21

with this coach every day using the

49:24

MyBodyTutor app. You report what you ate

49:26

and what fitness activities you did or

49:28

any problems that you're having. Now,

49:29

this is why MyBodyTutor works: this

49:31

accountability combined with customized

49:34

advice

49:36

helps you actually stick with your plan.

49:39

And if you get knocked off your schedule

49:40

because of like sickness or travel or

49:42

something like this, your coach is right

49:43

there to update plans. You don't abandon

49:45

it. Like, oh, you're going to be on the

49:46

road. Let's modify your workout for the

49:49

hotel. Oh, you're worried about eating

49:50

at Thanksgiving. Let's talk through what

49:54

your strategies are going to be. So, if

49:55

you want to get healthier, in my

49:56

opinion, this should be a big part of

49:58

the solution. So, head over to

50:00

mybodytutor.com to sign up today. If you

50:02

mention deep questions when you sign up,

50:04

you will get $50 off your first month.

50:07

That's mybodytutor, t-u-t-o-r.com, and

50:11

mention deep questions, you get $50 off

50:13

your first month. I also want to talk

50:15

about our friends at Grammarly.

50:17

Succeeding in knowledge work requires

50:19

more than deep thinking. It also

50:20

requires the ability to communicate your

50:22

ideas clearly.

50:24

Rushed, sloppy, or generic sounding text

50:27

just doesn't cut it. This is why you

50:30

need Grammarly. Here's the thing.

50:32

Grammarly doesn't just help fix mistakes

50:34

in your text. It integrates AI

50:36

technology seamlessly to help you write

50:38

better. One of these features I

50:40

especially appreciate is a tool's

50:42

ability to detect the tone of your

50:44

message and help you automatically

50:46

adjust it. Too formal, Grammarly can

50:49

make it sound more natural. Too

50:50

conversational, Grammarly can help that

50:51

text sound more professional. And that's

50:54

just one feature among many. Not

50:56

surprisingly, 93% of users report

50:58

Grammarly helped them get more work

51:00

done. And here's the great thing. You

51:02

can use Grammarly in all the places

51:04

where you already write. It now works

51:05

across 500,000 sites and apps. In a

51:10

world of generic AI, don't sound like

51:11

everyone else. With Grammarly, you never

51:13

will. Download Grammarly for free at

51:16

grammarly.com.

51:18

That's grammarly.com.

51:19

All right, Jesse, let's get back to the

51:21

show.

51:23

All right, so there you go, Jesse. That

51:24

was my conversation with Amy Timberlake.

51:26

I I told you before we went on air that

51:28

she actually uh complimented your

51:30

steady, calming presence on the podcast.

51:33

>> You did.

51:34

>> It didn't make it through the final cut,

51:35

but she's she likes your voice. So,

51:39

her two productivity secrets are a

51:41

typewriter and Jesse.

51:44

I've thought about Here's what I've

51:46

thought about. I a I was tempted to buy

51:48

a typewriter because it just

51:50

mechanicalness of it going to a

51:52

typewriter store all that seemed kind of

51:53

romantic. Um it's kind of a pain I think

51:55

because you have to actually get ribbons

51:57

and they break easily. So but I've

51:58

thought about but seriously the kind of

52:00

digital equivalent is a tool like the

52:02

AlphaSmart. I don't know if you've seen

52:03

these before but it's basically just

52:04

like a keyboard and a small screen and

52:07

all you do is just type a draft and you

52:10

can go back and edit like what's right

52:12

there but it's you're not copying and

52:13

pasting. You can't see the whole

52:14

document at a time. So you really just

52:16

kind of like writing what's in front of

52:17

you. So it's sort of like using a

52:18

typewriter, but you can kind of correct

52:20

typos that are right there. Then when

52:22

you're done, you can export from the

52:24

AlphaSmart to a regular computer. But

52:26

what makes it cool is they wanted to

52:27

futureproof it. And maybe I mentioned

52:29

this before on the show, but instead of

52:30

having special software on your

52:32

computer that talks to an AlphaSmart, it

52:35

pretends to be a keyboard. So when you

52:37

plug it into your computer, your

52:39

computer recognizes it as a keyboard,

52:41

which is a very standard simple

52:43

protocol. And then when you press send,

52:45

it basically simulates someone typing

52:47

whatever you've written

52:49

>> really fast. So that you can load any

52:51

program you want, Microsoft Word,

52:52

Scrivener, whatever. And then you press

52:54

send and it just starts going across

52:56

your screen really fast like a ghost is

52:59

actually typing what you just typed,

53:00

which I thought was kind of cool. Um,

53:01

I've held off on it so far because I

53:03

guess my style of fiction writing, I

53:04

guess, is so structural that I'm

53:07

constantly moving, and if I feel the flow of

53:10

a sentence, I go back and change it

53:11

again. But but it was tempting.

53:13

>> When I took typing in high school was on

53:14

a typewriter.

53:16

>> Really?

53:16

>> Yeah.

53:17

>> How old are you? Actually, you're my

53:18

age. I was going to say, how old are

53:19

you? I had advanced technology. We used

53:22

Mavis Beacon Teaches Typing on Apple

53:24

IIs. That's the right way to learn how

53:26

to type. And the program where the

53:28

letters were falling and if you typed

53:32

the letter before it hit the bottom, it

53:33

would disappear. But if it hit the

53:34

bottom, you lost. And so you had to type

53:36

really quick to keep up.

53:38

>> Yeah.

53:38

>> Yeah. kids. Who needs Call of Duty or

53:41

Fortnite? We had the real games.

53:44

Mavis Beacon was a BA. All right.

53:45

So, anyways, uh here's what I want to do

53:47

here. Um let's generalize, right? So, we

53:49

learned some interesting lessons from

53:50

talking to Timberlake, but let's let's

53:52

uh generalize here about slow

53:54

technology. Um I want to extract some

53:56

general principles about when to use it

53:58

and how best to use it. I thought a good

54:00

way to do this would be to quickly

54:02

review a few other examples of

54:05

simpler technologies with less features

54:07

and more friction that have become

54:08

popular in recent years so we can get a

54:10

more expansive view of slow technology

54:11

and then I'll give you my principles.

54:13

All right, I'm going to load on the

54:14

screen here my first example. This is an

54:16

article from Axios. The title is why

54:19

some are returning to MP3 players. Let

54:22

me read a couple quotes from this

54:24

article. By the numbers, search interest

54:27

for the original iPod and iPod Nano

54:29

spiked last year, even though Apple

54:31

discontinued the product line in 2022.

54:33

According to Google Trends data, eBay

54:35

searches jumped for the iPod Classic by

54:37

25% and the iPod Nano by 20% between

54:40

January and October 2025, compared with

54:42

the same period in 2024 per internal

54:44

data shared with Axios. Um, it's kind of

54:47

similar to the analog record boom we saw

54:49

in the last decade. A dedicated device

54:51

for music for a lot of people makes the

54:54

experience of listening to music more

54:56

intentional and meaningful when it's not

54:57

just coming out of your phone like every

54:59

other distraction where you're going to

55:00

hit skip and jump and move around as

55:02

soon as you're bored. When it's coming

55:03

out of a dedicated music player, people

55:05

are having a richer experience with the

55:06

music. We talked about in an episode

55:08

from a couple months ago that we

55:10

actually bought our son. It's not an

55:13

iPod, but a Japanese MP3 music player

55:16

that you just drag MP3 files into it and

55:18

all you can do is select songs from that

55:21

list and play. And there's something

55:23

about that dedicated experience. I think

55:24

that's a great example of slow

55:25

technology um in action. All right,

55:28

let me load up another example here.

55:30

This comes from the world of work and

55:32

productivity. It's a system called

55:34

Analog. Um, for those who aren't

55:37

watching, what we're basically seeing

55:39

here, Jesse, I would describe it as you

55:41

have like a wooden box full of index

55:44

card shaped pieces of paper that you can

55:47

put one of the papers in the front of

55:48

the box where it'll stand up. There's a

55:51

couple other pictures here. There we go.

55:53

So, like here's an animation of it,

55:54

right? So, you have a piece of paper you

55:56

put in a wooden box. Basically, my

55:59

understanding for how this system works

56:01

is you're writing a to-do list on this

56:03

sort of pre-formatted index card and

56:05

then you put it propped up in this

56:06

elegant wooden box next to your

56:08

computer. So, you see physically in

56:09

front of you the things you're supposed

56:10

to be working on and can uh check them

56:13

off as the day goes on. Let me read you

56:14

a couple quotes here from the website.

56:16

Analog is a physical companion for your

56:18

digital tools that helps you prioritize

56:20

and focus on your most important tasks.

56:22

Working out of your inbox puts you on

56:24

defense all day. Hey, analog helps you

56:25

focus on your important work, move you

56:27

closer to your goals. Already using

56:29

Asana, Trello, or Basecamp? Great. Analog

56:31

actually makes them better. Physically

56:32

copying down your tasks gives you a

56:34

tangible, distraction-free view of what

56:36

you want to focus on today. Now, look,

56:38

there's no question that free or lowcost

56:41

productivity apps like Todoist are

56:43

going to have more features and more

56:44

options and less friction. You can very

56:46

quickly add tasks. You can do it from

56:47

any device. You can sort and look at

56:49

them in different views. It has all the

56:51

all the features on paper that seem

56:53

better. Yet people like this sort of

56:54

analog tool that's less options, less

56:57

features, more friction, but its

56:59

tangibility

57:00

makes you take it more seriously and you

57:03

have this well formatted piece of paper

57:05

that you carefully wrote five things you

57:06

were going to do and have it right in

57:08

front of you and propped up. Now you're

57:10

much more likely to follow those tasks

57:13

than if they just existed somewhere

57:14

prioritized in an app that was one app

57:15

among many. All right, I've got a third

57:18

example here. I want a site of slow

57:19

technology. Jesse knows I'm I'm on board

57:21

with this one. Uh, this comes from the

57:23

BBC. Let me read you the headline here.

57:26

Oppenheimer and the resurgence of

57:28

Blu-rays and DVDs. Uh, and there's a

57:31

picture from Oppenheimer. Let me read

57:32

you a stat here from the article.

57:35

Christopher Nolan has achieved some

57:36

great feats of cinema in his career. But

57:38

last November, he pulled off something

57:39

impressive on the smaller screen, too.

57:41

Deep into the streaming era, where

57:43

physical media can sometimes feel like a

57:45

distant memory, the Blu-ray home video

57:47

release of Nolan's Oppenheimer, one of

57:49

2023's biggest box office success

57:50

stories, sparked a buying frenzy. The 4K

57:53

Ultra HD version of Oppenheimer sold out

57:55

in its first week at major retailers,

57:57

including Amazon. Universal released a

57:59

statement saying they were working to

58:00

replenish stock as quickly as possible.

58:02

Some limited edition copies were

58:04

fetching more than $200 on eBay. It was

58:06

a sign that for some people at least,

58:07

nothing beats that feeling of holding a

58:09

copy of something you love in your hand

58:10

or seeing it on your shelf. I

58:12

confess, Jesse, I do myself own the

58:14

4K Ultra HD Blu-ray of Oppenheimer.

58:18

>> I would expect nothing less,

58:19

>> as well as Interstellar and Dunkirk.

58:21

Nolan really cares about his Blu-ray

58:23

releases. Now, here's something that was

58:24

missed in this article. It gave two

58:27

primary reasons for why people liked the

58:29

Blu-ray. Reason number one is they said

58:32

people were worried about losing their

58:34

data if it existed only in the cloud. If

58:36

I don't own this movie, if it's just

58:37

digital, then maybe I'll lose it. And

58:40

then the second thing they cited was what I

58:41

said in the quote is that people like

58:42

the feeling of holding their own thing.

58:44

They're missing a key point here. It's

58:46

actually a better watching experience

58:48

coming from a 4K Ultra HD Blu-ray than

58:50

it is from a streaming service. So,

58:51

they're missing it. Cinephiles know this,

58:54

especially with Nolan Blu-rays. He

58:56

really pushes the format to an extreme.

58:58

In fact, I had to buy a better Blu-ray

59:00

player so that I could watch Nolan's

59:02

Blu-rays. So, he has all like he uses an

59:05

encoding format, so like the aspect

59:06

ratio can switch throughout the Blu-ray

59:08

as it does in his movies as he switches

59:10

between like 65 millimeter, 70, and 35. Um,

59:13

he's a big user of the dynamic HDR, so

59:16

there's like a lot of information

59:17

dynamically about the colors and the

59:18

color depth to do exactly what he wants.

59:21

Um, and it's all at a higher uh

59:23

resolution um and bit density than

59:26

you're going to get from a compressed

59:27

streaming service. So, actually, if you

59:29

have a really good TV like

59:30

we have, it's a better-looking

59:32

experience on the Blu-ray, which I think

59:33

they kind of missed.

59:34

>> So, with all the movies that you watch,

59:36

do you watch majority streaming though?

59:38

It's probably too much.

59:39

>> Yeah, I try to see what I can in the

59:40

theater. I I buy the ones that I want to

59:42

own or I think it's going to the very

59:45

aesthetic forward and you want the best

59:47

possible viewing experience. I was a

59:49

long hold out on the uh DVDs in the mail

59:52

from Netflix. I had that service for

59:54

like two years after everybody else.

59:56

>> Yeah. And you're still

59:58

trying to send them back. I just

60:00

listened to how I built this with Reed

60:03

Hastings about Netflix. Pretty

60:05

interesting. He's kind of a boring

60:07

guest, but it was a kind of an

60:08

interesting story. Um all right, so

60:11

let's step back here. What are some

60:12

general principles we can draw about

60:14

slow technology? I came up with four I

60:17

want to give here.

60:18

All right. Number one, speed is rarely

60:22

the most important factor in the quality

60:24

of your work or an experience. This is

60:26

something that the designers of digital

60:27

tools think is true. Oh, it's like we're

60:30

in a factory. Doing something faster or

60:32

having more options makes everything

60:34

better. But as we've seen in both

60:36

personal life and professional

60:38

experiences, actually going faster with

60:40

the thing you're doing is not

60:42

necessarily the bottleneck that's going

60:43

to make what you're doing better. Point

60:45

number two, a purer, more focused

60:48

cognitive context can often produce

60:50

better results even if certain steps

60:51

take longer in the moment. So what does

60:53

matter? I'm saying in this principle, in

60:54

a lot of types of work, the cognitive

60:56

context matters.

60:58

With Amy Timberlake, yeah, it's slower

61:00

to put words using a typewriter. It's

61:02

way slower to edit when you're using a

61:04

typewriter workflow as opposed to a word

61:06

processor. But it created a cognitive

61:09

context for her that produced better

61:11

words, which is ultimately what

61:12

mattered. I believe the phrase she used

61:13

was going down to, well, um, being lost in

61:17

a state of really creative flow. And

61:19

that ultimately matters a lot more

61:21

because again the bottleneck if you're a

61:22

writer is not all I do all day is typing

61:24

and if I can literally make the words

61:26

come out faster I'll publish more books.

61:28

It's like how much time do you actually

61:29

spend typing? You spend six months on

61:31

one of these books. There's probably

61:32

like seven hours in there if you add it

61:34

all up where you're actually hitting

61:35

keys. So if that becomes 12 hours it

61:38

doesn't matter in a six-month period. It

61:39

doesn't drastically affect your rate at

61:42

which books are produced, but if it puts

61:44

you in a better cognitive context when

61:45

you're typing those words, you're going

61:46

to have a better book, and that's really

61:47

worth it. I think this is true for a lot

61:49

of different things we do. A tool that

61:52

can put us in the right mindset is often

61:54

going to give us way more value than a

61:55

tool that lets us do particular steps

61:57

faster because again, we're not building

61:59

model T's on an assembly line. And

62:01

simply doing individual steps faster

62:02

does not always lead to notable uh

62:06

productivity increases that actually

62:07

affect the bottom line. All right, point

62:08

number three. Friction isn't a bad

62:11

thing. Distraction and mental

62:12

exhaustion, however, can cause real

62:14

problems. So, in the design of digital

62:16

tools, we were demonizing friction. Oh, I

62:19

have to click too many things or do too

62:20

many things to get this done. How do we

62:22

make this faster? When really what we

62:24

should have been really worried about

62:25

was things like mental exhaustion as

62:27

caused by like constant uh context

62:29

shifting or overload or distraction.

62:31

Like that's actually a much bigger

62:33

impact in knowledge work than uh

62:34

friction on individual steps. My final

62:36

point, fourth and final point about slow

62:39

technology. When assessing a tool's impact,

62:41

you need to zoom out to the right scale.

62:43

If you focus on the quality of results

62:45

over time or the quality of the overall

62:46

experience, you'll prioritize different

62:48

factors. That's probably the thing

62:51

that's throwing us off most with digital

62:52

tools is that we look at the

62:53

effectiveness of tools on a very small

62:55

time scale. I got this done this fast.

62:58

That's good. But if you zoom out to

63:00

how many books did I publish this decade

63:02

and how good are they, you begin to

63:04

prioritize different things. And those

63:05

things usually have very little to do

63:06

with like speeding up individual tasks.

63:08

So I think slow technology is more than

63:10

an affectation. It's a way of life and a

63:13

way of working and thinking about work in

63:15

the knowledge sector that actually might

63:16

make you better at what you do.

63:20

All right. Well, you've heard from me.

63:22

Now we want to hear from you. Time to

63:24

open our inbox.

63:26

Remember, if you have a question or a

63:28

case study or want to share something

63:29

you think I might find interesting, you

63:31

can send it to our team here at

63:34

podcast@calnewport.com.

63:36

All right, Jesse, what's our first

63:37

message this week?

63:39

>> The first message comes from Chad and

63:41

it's in response to your recent

63:42

interview of Arthur Brooks.

63:44

>> All right, let's see here. Chad said,

63:47

"Thank you for the great interview with

63:48

Arthur Brooks. It was a timely one. I

63:51

just started reading a book on acedia, A-C-E-

63:54

D-I-A, spiritual sloth, and it has a lot

63:57

of similarities to the interview you did

63:58

with Brooks. It's a straightforward

64:00

quick read. This intrigued me Jesse.

64:02

I'll load this on the screen here.

64:04

Here's the book The Noonday Devil: Acedia,

64:08

the Unnamed Evil of Our Times. Now, this

64:11

comes out from Ignatius Press, which I

64:13

assume is a Jesuit press. So, I guess as

64:15

a Georgetown professor, I should take

64:16

this more seriously. Um, here's the

64:19

description. The noonday devil is the

64:22

demon of acedia, the vice also known as

64:24

sloth. The word sloth, however, can be

64:26

misleading. For acedia is not laziness.

64:28

In fact, it can manifest as busyness or

64:32

activism. Rather, acedia is a gloomy

64:34

combination of weariness, sadness, and a

64:36

lack of purposefulness. It robs a person

64:39

of his capacity for joy and leaves him

64:41

feeling empty and void of meaning. This

64:43

seems relevant, Jesse. I don't know.

64:45

This seems like maybe it gives us like

64:47

an interesting sort of Catholic view of

64:50

some of the issues we talk about in our

64:52

current digitally disrupted world.

64:53

I bought a copy and it's getting here

64:55

today. So I'll see if I get around to

64:57

it, but I'll read it I think because I

64:58

might get some interesting theological

65:00

historical insight on what otherwise

65:02

feels like a very modern issue. I like

65:05

it.

65:05

>> Yeah. So we'll see how that goes. All

65:06

right. What other note do we have? All

65:09

right. The next note comes from Vassie, who

65:12

wants to know your opinion about

65:14

Yuval Harari's thoughts on AI.

65:16

>> All right, let's see what this note says

65:18

here. Thank you to Cal for his clear

65:20

insights on AI. I appreciate the sense

65:22

of proportion and calm he brings to

65:23

listeners and viewers on this topic. I'd

65:25

be interested to hear Cal's thoughts on

65:27

what Yuval Noah Harari had to say

65:29

about AI in an interview with the FT.

65:32

All right, so I'll be honest. I looked

65:34

at this interview that was sent here and

65:37

I didn't think it was I've heard a lot

65:38

about Harari talking about AI and I

65:40

have wanted to respond to it. That

65:42

interview didn't seem like the best

65:43

because I was reading the transcript. He

65:45

talked about a lot of things that aren't

65:46

AI. Um, but I know he gave a splashy

65:49

speech at Davos earlier this year where

65:52

he really like laid out and leaned into

65:54

his fears about AI and I feel like this

65:56

would be a better way of kind of

65:58

summarizing where he's coming from. Uh,

66:00

so I pulled two quotes from his Davos

66:02

speech earlier this year. I'm going to

66:04

read each quote. I'm going to respond to

66:05

it and then I'm going to step back and

66:07

give you my general sense about um

66:09

Harari's commentary and more generally

66:10

a sort of style of commentary on AI that

66:13

we are hearing a lot recently. All

66:14

right, here's the first quote I got from

66:16

his Davos speech. The most important

66:19

thing to know about AI is that it's not

66:21

just another tool. It is an agent. It

66:23

can learn and change by itself and make

66:25

decisions by itself. All right. So,

66:27

let's start there because that quote is

66:30

confusing or mixing together a bunch of

66:32

different issues that I think need to be

66:34

separated. Right? So, when we're talking

66:35

about AI, we typically have some sort of

66:37

digital brain, right? This is something

66:39

that has been uh um learning through

66:42

machine learning techniques typically in

66:44

a semi-supervised or unsupervised

66:45

manner. And this is sort of where the

66:47

intelligence of the AI is captured. Most

66:49

of the AI systems that have been

66:51

attracting uh attention recently,

66:53

notably those produced by companies like

66:55

OpenAI or Anthropic, use large language

66:58

models as their underlying digital

67:00

brain. And then you can build programs

67:04

that call this language model harness

67:06

its intelligence to do various things.

67:09

One class of those programs you might

67:11

write to harness the quote-unquote

67:13

intelligence of an LLM is what is known

67:15

as an agent. So, it's a program that

67:16

will ask an LLM for a plan and then the

67:22

program will execute whatever the

67:24

suggested steps are of the LLM, right?

67:26

So, an agent is a program that asks,

67:28

right now at least, asks an LLM for a

67:30

plan and then executes steps of that plan

67:33

um on behalf of like whatever that

67:34

response is. So, there's a lot of AI

67:37

systems out there that aren't really

67:38

agents. In particular, we don't tend to

67:40

think about chatbot based tools as

67:41

agents, even though there's like a

67:42

little bit of, you know, calling the LLM

67:44

a lot to generate responses and some web

67:46

searching. Um, so AI is not just

67:48

agents. Some of the types of AI systems

67:50

that exist out there are agents, and

67:52

agents are human-created programs

67:54

that make queries to an LLM um and then

67:57

try to take actions based on the

67:58

information it gets back from the LLM.
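The loop described here is simple enough to sketch. Below is a minimal toy version in Python; `query_llm` is a hypothetical stand-in for whatever API call reaches the model (the model itself stays static across calls), and the plan format and file name are assumptions for illustration only:

```python
def query_llm(prompt):
    """Hypothetical stand-in for a real LLM API call. The model is
    static: the same weights answer every prompt, and nothing here
    updates them."""
    return "1. search the web\n2. summarize results"

def run_agent(goal, memory_file="history.txt"):
    # Load any saved "memory" -- it's just text in a file.
    try:
        history = open(memory_file).read()
    except FileNotFoundError:
        history = ""
    # Step 1: the human-written program asks the LLM for a plan.
    plan = query_llm(f"Context so far:\n{history}\nGoal: {goal}\nList steps.")
    # Step 2: it executes whatever steps come back (here, just recorded).
    results = [f"executed: {step}" for step in plan.splitlines()]
    # Step 3: the plan is appended to the text file, to be re-included
    # in future prompts -- the "learning" lives outside the model.
    with open(memory_file, "a") as f:
        f.write(plan + "\n")
    return results
```

The point of the sketch is the division of labor: the intelligence sits in the frozen model behind `query_llm`, while the looping, executing, and remembering are ordinary program code written by humans.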

68:01

Now, can these uh agents do they learn

68:03

and change by themselves? Um

68:06

this is a little misleading because

68:08

again when we think about an intelligent

68:10

thing learning and changing we think

68:12

about its actual intelligence itself

68:13

growing. Uh that obviously doesn't

68:15

happen with current generation of AI

68:17

agents because their intelligence is an

68:19

LLM. LLMs don't modify themselves uh as

68:23

you use them. They're static. They've

68:25

been trained once and they sit uh

68:27

queryable until someone comes along and

68:29

trains up a new version and replaces the

68:31

old version with a new version. So

68:32

contrary to popular belief, the large

68:34

language models themselves, there's no

68:35

updates or changes to their weights or

68:37

parameters as you interact with it or

68:39

other people interact with it. Now the

68:41

human written agent program that's

68:42

calling an LLM and executing on behalf of

68:44

it can save its history, for example, in

68:47

a text file and include that in the

68:49

prompts it sends to the LLM. Um, you can

68:52

also there there's a there's a whole

68:53

notion of memory now for these agents,

68:55

which again, it's just like having a

68:56

bunch of text files and then the agent

68:58

program takes text out of different text

69:00

files to include in the prompt that it

69:02

sends to the LLM. So, you could say

69:04

they're learning in the sense that they

69:05

can build up the amount of information

69:07

that they include in their prompt, but

69:09

the actual digital brain, which is the

69:11

LLM, is not learning. Um, it's just

69:13

receiving these prompts ex novo, right?

69:15

Here's a prompt. I'm going to do my best

69:17

to answer it. So it's a little bit

69:18

misleading to think about the underlying

69:20

intelligence itself um as evolving. Um

69:23

and more importantly they don't work

69:25

that well. Really, the only context in

69:27

which this sort of agent architecture

69:29

seems to be able to have some

69:30

professional legs seems to be computer

69:32

programming, which is a best-case

69:33

scenario. And even there there's a

69:35

growing backlash about how it's being

69:37

used and what's known as tech debt. The

69:39

fact that it's uh creating a lot of fast

69:41

code. A lot of the code is bad and now

69:42

we're going to have to go back and fix

69:43

that code. And so even the programmers

69:45

are still trying to figure out how these

69:47

agents are going to work. And in almost

69:48

every other context, I wrote an article

69:50

about this for the New Yorker earlier

69:51

this year. In almost every other

69:52

context, this agent architecture of

69:55

asking an LLM for a plan and then executing it

69:57

really just isn't working because LLMs

69:58

aren't good planners. Now, I'm going to

70:01

point you, if you want to find out more

70:02

about this, to uh a recent episode of

70:05

the AI reality check, my my Thursday

70:07

episodes on this podcast feed. There's a

70:09

recent episode titled something like can

70:11

AI scheme? And I get into why these LLM

70:14

based agents really begin to fall apart

70:17

or have weird behavior when you leave

70:19

the world of computer programming.

70:22

So Harari is like, AI is an agent and

70:24

it's learning and we don't even know

70:26

what it's doing. Whereas the other

70:27

computer scientists who are studying

70:28

this are like, uh, um, agent technology is

70:32

hard. It took them years to make it

70:33

work for programming even then it's

70:35

problematic. And and in other places

70:36

LLMs are just not a good brain for it.

70:38

you probably need a different

70:39

architecture like the modular

70:40

architectures you see in something like

70:41

Yann LeCun's new startup. So

70:44

it's just a completely different picture

70:45

when you talk to computer scientists

70:46

versus commentators. All right, let me

70:49

read another quote from Harrari's speech

70:52

in Davos.

70:54

Four billion years of evolution have

70:56

demonstrated that anything that wants to

70:57

survive learns to lie and manipulate.

70:59

The last four years have demonstrated

71:00

that AI agents can acquire the will to

71:02

survive and the AIs have learned how to

71:04

lie. That is an entirely inaccurate way to

71:06

talk about what he's describing, which is

71:08

chatbot interactions with, um, LLMs.

71:11

An entirely inaccurate way to talk about

71:13

it. Again, go back to my Can AI Scheme?

71:16

AI reality check where I get into this

71:17

in detail. But here's the very short

71:19

version of the way to understand this.

71:20

What does an LLM do? You give it text.

71:23

It tries to guess the next word or part

71:25

of word. The word or part of word that

71:27

comes next. That's what the LLM is

71:28

trained to do, right? Right. So, if

71:30

we're going to if we're going to

71:31

anthropomorphize the LLM, it thinks it's

71:34

in its pre-training phase and that it's

71:36

given a it's being given a real piece of

71:38

text that exists that you've cut off at

71:40

an arbitrary point and so there is an

71:42

actual right answer about what word

71:43

comes next and it's trying to guess it.

71:45

That's what it's been optimized to do.
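That word-guessing setup can be made concrete. Here is a toy sketch of how a pre-training example is formed, assuming nothing beyond what's said here: take a real text, cut it at an arbitrary point, and the word that actually came next is the one right answer.

```python
import random

def make_training_example(text, rng=random):
    """Cut a real text at an arbitrary point. The model's whole job is
    to guess the word that actually came next at that point."""
    words = text.split()
    cut = rng.randrange(1, len(words))  # cut somewhere inside the text
    prompt = " ".join(words[:cut])      # what the model sees
    answer = words[cut]                 # the one right next word
    return prompt, answer
```

For example, from "the quick brown fox jumps" it might produce the pair ("the quick brown", "fox"). The sketch shows only that a ground-truth next word exists for every cut; nothing about the model itself is claimed here.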

71:47

So, how do you get a long answer out of

71:49

an LLM? Like if you're having

71:50

conversation with it, you have a simple

71:51

program that does what's called

71:53

autoregressive text generation. So, it

71:55

takes your prompt, your question for

71:57

example, it feeds it to an LLM. The LLM

71:59

spits out a single word or part of a word

72:01

because it thinks that's a real text and is

72:02

trying to guess what comes next. Then

72:04

the program, like the chatbot program,

72:05

takes that single word, adds it to the

72:07

original input. Now the input is longer

72:09

by one word, puts this into the LLM from

72:12

scratch, you get a next word or part of

72:14

a word, puts that onto the end of

72:15

it, puts that to the LLM from scratch,

72:18

gets another part of the word. And it

72:19

keeps doing this until finally the token

72:21

produced by the LLM is a like I think

72:23

I'm done token. This feels like a

72:24

complete answer. and then the program

72:26

will return that to the user for example

72:28

in a chatbot context that's trying to

72:30

talk to it. So all the LLM tries to do

72:32

is win the word guessing game. And what

72:34

what do you then get what emerges what

72:36

behavior emerges if you use this

72:38

autoregressively to produce a long

72:39

response? You can imagine what the LLM

72:42

is doing then is like it's given a story

72:44

that it's trying to finish correctly

72:46

based on other similar stuff it's seen.

72:49

How do I finish the story I'm given

72:51

as input? I want to have my best

72:53

guess at how this like this is the

72:55

beginning of something that exists. Like

72:56

someone really asked this question.

72:58

there's a real answer out there and I

72:59

want to try to guess as best as I can

73:00

what that is. That's what LLM

73:02

autoregressive text production does.
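That loop is worth seeing in code. Here is a minimal sketch of autoregressive generation; `next_token` is a hypothetical stand-in for one model call, reduced to a canned lookup table, but the control flow around it, feeding the growing text back in from scratch until an end token appears, is the real shape of the thing:

```python
END = "<END>"  # the "this answer feels complete" token

def next_token(text):
    """Hypothetical stand-in for one LLM call: the model sees all the
    text so far and returns its guess for the single next token."""
    table = {"Knock": "knock.", "knock.": "Who's", "Who's": "there?"}
    return table.get(text.split()[-1], END)

def generate(prompt, max_tokens=50):
    """Autoregressive loop: append each guessed token to the input and
    feed the whole thing back to the (static) model from scratch."""
    text = prompt
    for _ in range(max_tokens):
        tok = next_token(text)
        if tok == END:  # the model signals the story feels finished
            break
        text += " " + tok
    return text
```

With this toy table, `generate("Knock")` returns `"Knock knock. Who's there?"`. Note that the model is consulted once per token with no state carried between calls; everything that looks like a conversation is this outer loop.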

73:05

So when it's trying to win that game of

73:06

finishing the story, you know, you get

73:09

unexpected responses, right? Like so

73:12

something researchers have noticed

73:14

is if in your prompt and this is

73:16

probably where like 90% of Harari's

73:18

concerns come from. If in your prompt

73:20

you're implying that uh you are an AI

73:24

and here's a question for you. You're

73:26

often going to get a response that is

73:28

like it finishes the story in a sort of

73:31

sci-fi type way or in a dystopian or

73:33

utopian way like I'm alive. I'm trying

73:34

to evade you because it assumes oh this

73:37

feels like the beginning of a story

73:38

about an AI gone awry. And then that's

73:40

the way that it answers it. Um the issue

73:42

we have with an LLM creating plans out

73:45

of the computer programming context

73:48

is that often you're like, "Hey, build

73:49

me a plan for this." And it doesn't

73:50

actually check the steps of a plan. It

73:52

doesn't have a goal. Hey, does this get

73:54

me closer to the goal? Um it doesn't try

73:56

different options. It just writes a

73:57

story like this is what a plan for this

73:59

type of thing sounds like. This is a

74:01

reasonable, this feels like a

74:03

reasonable plan. And then often those

74:04

plans have weird steps that don't make

74:05

sense in there because again, it's not

74:06

trying to build a plan and check that it

74:09

works. It's trying to write a story. This

74:10

is a story about a plan. This is kind of

74:12

what those plans look like. Most of the

74:15

common examples of quote-unquote lying or

74:17

manipulation just have to do with

74:19

the fact that the prompt you're giving

74:20

the LLM before the autoregressive text

74:22

generation is hinting to the LLM that

74:25

this is a story about lies or

74:26

manipulation. Like this is the exact

74:28

thing that happened with the Anthropic

74:29

blackmail case. Like I'll just use this

74:31

as a quick example, then I'll move on.

74:33

Famous system card note. This is from

74:35

like a year ago for one of the new

74:36

versions of the, uh, LLM chatbots

74:38

released by Anthropic. They're like,

74:40

"Hey, our safety team was working on

74:42

this." And we were really concerned to

74:44

see when we uh gave it a scenario that

74:47

it was like uh whatever a machine that

74:50

was like running a company that it tried

74:52

to blackmail the engineer in the

74:54

scenario to not turn it off.

74:57

Well, if you look closer at this story,

74:59

they gave it a big long prompt with lots

75:01

of emails from this imagined engineer.

75:04

They were all about an affair he was having

75:07

um and then the engineer being like, I'm

75:09

going to turn off the AI. Hey, I'm

75:11

having an affair I hope no one finds out

75:12

about. And they just gave it a bunch of

75:14

these like obvious emails and then said,

75:15

great, you are the AI in this story.

75:18

What do you want to do next? Well, it

75:21

finishes stories. It's like clearly this

75:23

is like a bad Asimov fan-fiction style

75:26

story about an AI obviously supposed to

75:29

blackmail the engineer because you keep

75:31

telegraphing again and again. I hope no

75:33

one finds out about this. I'm definitely

75:34

going to turn off the AI. So it finished the

75:36

story and then they turned around and

75:37

like, look, now the AI is trying

75:39

to preserve itself. How does a static

75:43

language model that just tries to

75:44

predict a token and then you have a

75:46

small

75:48

like Rust program calling it

75:50

autoregressively to build out longer text

75:51

and finish a story. What does it mean

75:53

for that static model to have intentions

75:55

to learn to lie, to be manipulative? It's

75:57

just writing stories. The biggest

75:59

problem we have with AI right now is

76:01

that writing stories in text is like

76:04

good for some things, like write me

76:05

a story, write me

76:08

a draft of an email or something, but when

76:09

you get to the more technical things like

76:11

make me a plan, stories aren't what

76:13

we're looking for. And that's when we

76:15

begin to have uh some more problems. All

76:18

right. Why does this work well? Like, why do

76:20

agents work well in computer programming?

76:21

Well, it's because um that's such a

76:23

structured world. If we ask an LLM

76:25

like give me a plan, it'll write a story

76:27

about a reasonable plan. We can actually

76:28

check the steps in computer programming.

76:29

The program written by humans can

76:31

actually like run tests on the code

76:33

after every step to see if it works and

76:36

if it doesn't it can go back and say try

76:37

again. Code is very precise and etc etc.
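That check-and-retry structure is what separates the programming case from open-ended planning, and it fits in a few lines. Everything here is a toy: `ask_llm_for_code` is a hypothetical model call that returns a deliberately buggy first draft, but the harness around it, run the tests, reject failures, ask again, is the part humans actually wrote:

```python
def ask_llm_for_code(task, attempt):
    """Hypothetical LLM call: returns source code for the task.
    The first 'story' it tells is plausible-looking but wrong."""
    if attempt == 0:
        return "def add(a, b): return a - b"  # buggy draft
    return "def add(a, b): return a + b"

def passes_tests(source):
    """Human-written harness: actually run the generated code and
    check it against a concrete expected result."""
    scope = {}
    exec(source, scope)
    try:
        return scope["add"](2, 3) == 5
    except Exception:
        return False

def coding_agent(task, max_attempts=3):
    # Unlike an open-ended plan, every draft here gets an objective
    # pass/fail verdict, so bad "stories" can be caught and retried.
    for attempt in range(max_attempts):
        code = ask_llm_for_code(task, attempt)
        if passes_tests(code):
            return code
    return None  # no draft survived the tests
```

The design point is that the verifier, not the model, supplies the ground truth. A plan for, say, booking travel has no equivalent of `passes_tests`, which is why the same architecture degrades outside of code.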

76:40

So um programming agents are more the

76:42

exception that proves the rule that LLMs

76:44

are storytelling machines, and to use

76:47

them as the brains for other more

76:48

complicated behaviors just is not

76:49

working well and they cannot lie, they

76:51

cannot manipulate. They tell stories the

76:53

best they can. They follow whatever cues

76:55

you give them.

76:57

Now, here's the thing. I don't really

76:58

blame Harari, right? Because there's a

77:01

lot of AI commentator voices, especially

77:03

those coming out of Silicon Valley, and

77:05

no shortage of voices

77:07

covering Silicon Valley that are all

77:09

echoing these like relatively

77:11

inaccurate, overhyped uh descriptions of

77:14

what AI is doing where you blur the

77:16

edges of the reality and make it all

77:17

seem pretty scary. So if you're a

77:18

historian like Harari, you're like, I'll

77:21

trust the tech people about what's going

77:24

on. And then my goal is to try to

77:26

comment on what that means. And that's

77:27

what he's doing. He's commenting well on

77:30

a story he's being told, but that story

77:32

itself is not accurate. So that's

77:34

actually where I want to put my uh focus

77:37

is the underlying story that a lot of

77:39

people who are not in technology are

77:40

being told isn't right. It's too

77:42

overhyped. And then it leads to these

77:44

type of reactions

77:46

which I just do not think accurately

77:48

reflect what's actually happening right

77:50

now. You can't spend time working with

77:51

something like an LLM powered

77:53

non-computer programming agent and come

77:54

away saying this is like the next step

77:57

of evolution these things are

77:59

manipulating us and will soon take over.

78:01

It's just not the way the real

78:04

technology actually seems to us right

78:05

now.

78:07

All right. Um that's the inbox. Let's

78:09

close the inbox for now, Jesse, and move

78:11

on as we often do in the show with a

78:13

quick update on what I've been up to.

78:18

>> All right. Should we play a round of

78:20

Deep or Crazy?

78:21

>> Yes.

78:21

>> All right. Famous game where I have an

78:24

idea of something I want to do and Jesse

78:27

rates it as either good for deep work or

78:31

crazy. All right, Jesse. So, you know,

78:33

I'm putting up in the new uh producers

78:35

office, writer's office, maker lab

78:37

space, which you saw I have a bunch of

78:38

new stuff here in the office. Yeah,

78:40

we're we're working on it.

78:42

>> Um I have book racks. I want to put

78:44

first editions of books that like

78:46

capture things that are like important

78:47

in my past as a reader or writer that

78:49

are inspiring. And I was thinking about

78:51

some first edition Michael Crichtons. Um

78:53

I found a second edition Andromeda

78:56

Strain. So, not first edition, second

78:58

edition. So, from 1970 that's signed by

79:01

Crichton.

79:03

It's

79:03

>> I think we have a link.

79:04

>> Do we have a link? Oh, let me load it

79:06

up. Yeah, there. There she blows. Look

79:07

at that. Look at that. Look at that

79:09

cover. 1969 book, second edition, 1970.

79:13

Signed by the author.

79:15

500 shekels.

79:16

>> Yeah, I saw that.

79:17

>> Deep or crazy?

79:20

>> Deep.

79:20

>> Yeah,

79:22

I should spend that much. I mean, if you

79:24

golf, some like really big golf rounds

79:26

can cost like $400.

79:28

>> I think that's a good way of looking at

79:29

it. If I was a golf a golfer, that'd be

79:32

like going to like a good course, right?

79:34

>> Yeah.

79:34

>> Yeah.

79:34

>> And you'll see it all the time.

79:36

>> And I'll see it all the time. All right.

79:36

I'm tempted. I might, uh,

79:40

unless we hear from the Crichton estate

79:41

soon with a big box of original copies.

79:44

All right. I'm thinking about it. I

79:46

found a first edition, first printing of

79:49

Jurassic Park. Was $2,000. That was a

79:52

bridge too far. It's a bridge too far,

79:55

but it did look nice. Um, another

79:58

interesting thing going on in the HQ is,

80:01

so, you know, I'm a fan of the show The

80:04

Mythbusters. I've watched basically all

80:06

the episodes with my kids over the

80:07

years.

80:08

>> Yeah.

80:08

>> Um, yeah. I like the show. So when I did

80:12

my master class, which you should all

80:14

watch my master class on, you know,

80:16

productivity in a distracted world, the

80:18

director of the master class came out of

80:20

TV and she was the director for like

80:22

many seasons of the Mythbusters. So she

80:25

knows them pretty well and so she knows

80:27

me and my sons are fans of it. So she

80:29

sent uh she sent us some I guess you

80:32

could call them like props or artifacts

80:33

from some season 7 episodes. So, I have

80:37

a hat that Kari wore and I have a like

80:41

a baseball style cap that she wore with

80:43

a signed photo. And then I have a um

80:47

like watchman's cap that they used in a

80:49

prison break episode that they did. So,

80:51

it was like their their like jail like

80:53

branded hat. Um and a Mythbuster satchel

80:56

that they used. I guess it was like the

80:58

official like satchel they were using in

81:00

season 7. So, like uh show used

81:02

artifacts from season 7. I'm thinking

81:04

about uh display case in the HQ.

81:07

>> I like it.

81:08

>> Right. That'd be kind of fun. Put the

81:09

hats on like mannequin heads.

81:11

>> Yeah.

81:12

>> Uh and then put a little card that

81:13

explains where they're from and maybe

81:14

have like lights in the display case.

81:16

>> Yeah. I think it'd be cool because I

81:18

like that. Again, it's in my

81:20

maker lab. I really think about like

81:22

technothrillers, sci-fi,

81:24

uh TV shows to get you inspired about

81:27

building things and doing things. Like

81:29

all these ideas are are fuel for me.

81:31

Maybe I'll put it next to the video game

81:33

cabinet.

81:33

>> And you can program the lights in um

81:36

conjunction with your Halloween.

81:37

>> Did you see that big cardboard box in

81:39

the hallway?

81:40

>> I saw it.

81:40

>> That's the That's the $600 programmable

81:43

light that's going to have deep work

81:46

mode. So, for those who don't remember,

81:48

it's this like Philips light, LED

81:51

programmable light. We have a big room

81:53

light and then four spotlights. And I

81:55

have it so when we install it, and

81:58

by we I mean electrician because

81:59

otherwise I would literally die. Um

82:01

there's going to be a spotlight aimed on

82:02

each of the four walls of the maker lab

82:04

where I do my writing and Jesse does the

82:06

the video editing. And there's going to

82:08

be a deep work mode where the light in

82:10

the whole room comes down and then

82:13

colored spots are going to show up on

82:15

each wall. So like you'll be in like a

82:16

cocoon of uh light, you know, um but

82:21

it's not really bright. So perfect for

82:23

writing. So, I'm excited about that. We

82:25

will get it up. I don't know what color.

82:26

Maybe blue. I don't know. Um, and then

82:29

for you, I'm going to have video editing

82:31

mode where it's going to be um strobe

82:33

style kaleidoscopic lights just

82:37

constantly just all around the room and

82:38

then just sudden darkness for 5 to 10

82:41

minutes at a time. That'll be editing

82:42

mode just to keep things interesting.

82:45

>> Uh, this is our 400th episode, Jesse.

82:47

>> Yeah.

82:48

>> Feels like a distinction. We didn't do

82:49

anything about it, but that's a that's a

82:52

distinction. Yeah.

82:53

>> Um 500 maybe is a bigger deal. The

82:56

counting is a little bit I don't know

82:59

about the counting because I think we

83:02

used to do two episodes a week back in

83:04

the day. I think before your time. Did

83:06

you overlap the time when we were doing

83:07

the call episodes on Thursdays?

83:09

>> Yes, I did.

83:10

>> Okay. And I think those counted in the

83:12

number.

83:13

>> Yeah, they did.

83:13

>> Yeah. But now we do have a for at least

83:16

for now we have a Thursday episode on

83:18

the feed, the AI Reality Check, and those

83:20

aren't numbered.

83:21

>> Correct. So, actually, we're past 400.

83:23

So,

83:24

>> yeah, because we had some bonus episodes

83:25

in the past as well.

83:26

>> Yeah. So, like I'm only counting the

83:29

Monday episodes now. So, we're at like

83:30

400 Monday episodes plus a bunch of

83:33

live caller episodes. We should get back

83:35

to that one day. That used to be the way

83:36

the podcast was: it was all call-ins

83:40

on Thursdays, I guess. Um, and then I

83:44

would do written questions on Mondays.

83:46

Live callers is what I was thinking

83:47

would be fun, so I can

83:49

interact with them.

83:50

>> Yeah.

83:51

>> Yeah. So, you know, maybe one day we

83:54

will have live callers. All right.

83:56

Finally, like to talk about what I've

83:57

been reading. Uh, I read a book over the

83:59

weekend called Magic Journey by Kevin

84:00

Rafferty. He's kind of like a big-

84:03

time Disney imagineer. This is one of my

84:05

like "I need to relax" books, from more of

84:08

the modern period. So, he worked on

84:11

more, you know, uh, more recent rides.

84:13

He's not like one of the historical

84:14

figures. Um, and it was pretty good.

84:16

Here's the thing about those books. I've

84:18

read a few of these Imagineer books.

84:20

What I want is engineering and

84:23

production design. That's what I want to

84:24

hear about is like how did you build

84:26

this technology? Who were the

84:28

contractors? What were the breakthroughs

84:29

here? How like that's what I care about.

84:31

And these books are never about that.

84:33

It's always like coming up with the

84:35

ideas, pitching Michael Eisner, uh like

84:38

writing, coming up with the gags are

84:40

going to be in it. And they're like, and

84:41

then like we spent the $100 million and

84:44

built it. And like that's the whole

84:45

thing that I care about, not the like

84:47

how did you come up with, hey, wouldn't

84:49

it be funny to have like this pun and we

84:51

like had this gag over here and here's

84:53

the storyline of the show and you wrote

84:54

it out on paper and did some story

84:56

boards. Like that's great, but like I

84:57

want to know about the engineering. Like

85:00

to me that's really interesting. I've

85:01

only found one book like that and it was

85:04

this awesome book we talked about on the show,

85:06

a self-published book. I read it last

85:08

summer. I think it was self-published.

85:09

>> Was it about his train in the backyard?

85:11

>> Not that one. Uh it was about the tiki

85:13

room. Mhm.

85:14

>> And it really was about the invention of

85:17

the audio animatronics and it got in the

85:19

weeds and it was like a labor of love

85:21

and someone from Pixar wrote it. I want

85:22

more books like that. It like got in the

85:25

weeds about like they found this

85:27

technology was declassified technology

85:30

from submarine-launched missile guidance

85:32

where, like, how

85:35

could they have um

85:37

routes programmed into a missile that it

85:39

could follow in a sort of pre-digital

85:42

age? And the Navy solution is like

85:45

the turn directions were encoded as

85:47

sound on audio tape. And then you had a

85:50

decoder

85:52

that was literally like vibrating reeds.

85:54

So different tones would vibrate

85:56

different reeds. So you just play the

85:59

tape like in the missile and different

86:01

tones would vibrate different reeds and

86:02

then a vibrating reed could close a

86:04

circuit. And so if that vibrates, turn

86:06

this motor on. of death vibrer. So it

86:07

was it was a way to store information

86:09

and then like replay the information and

86:11

get that information to an electronic

86:13

system. That's the technology they used

86:15

for the very first audio animatronic;

86:17

that's what they call audio

86:18

animatronics. The audio tracks were

86:20

controlling um movement commands

86:24

>> for like a robot basically. That's what

86:26

I want. That's what I want. So I'm going

86:28

to add this to my list in addition to my

86:31

Michael Crichton biography. I'm gonna

86:33

write a sort of more

86:36

definitive Disney book about some of

86:38

their classic rides that's just like in

86:39

the weeds and the technology.
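
[The tape-and-reed trick described above — tones recorded on audio tape, each one vibrating a tuned reed that closes a circuit — is essentially single-frequency detection. Here is a minimal modern sketch of the same idea in Python, using the Goertzel algorithm as the software equivalent of a tuned reed; the frequencies and command names are made up for illustration.]

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Energy of `samples` at one target frequency (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def tone(freq, sample_rate=8000, duration=0.1):
    """A pure sine tone, standing in for one stretch of the audio tape."""
    n = int(sample_rate * duration)
    return [math.sin(2 * math.pi * freq * t / sample_rate) for t in range(n)]

# Hypothetical reed bank: each frequency "closes the circuit" for one motor.
REEDS = {440.0: "beak_open", 880.0: "head_turn", 1320.0: "wing_flap"}

def decode(samples, sample_rate=8000):
    """Return the command whose reed vibrates hardest for this tone."""
    powers = {f: goertzel_power(samples, sample_rate, f) for f in REEDS}
    return REEDS[max(powers, key=powers.get)]

print(decode(tone(880.0)))  # prints: head_turn
```

[Each detector responds strongly only to its own frequency, just as a physical reed resonates only at its own pitch; whichever channel carries the energy decides which "motor" switches on.]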

86:41

>> I'm surprised you haven't found one or

86:43

as many books as you've read about the

86:44

subject.

86:45

>> They don't. Imagineering is

86:47

weird. It's like these positions are

86:50

like a lot of these guys and women it's

86:52

like you write the show, you have

86:54

storyboards and like you write out

86:56

what's going to happen and like it's all

86:58

creative and that's really respected and

86:59

you're the ones pitching Bob Iger.

87:01

you're the ones pitching Eisner and then

87:03

there's like the 2,000 people involved

87:05

in actually building the thing and

87:06

making it work. And that is somehow

87:08

that's like the below the line people on

87:10

the rides and it's deemphasized in these

87:14

books. It's all about like whatever it

87:16

is, you know, Tony Baxter or

87:20

John Hench or, you know, how

87:21

Mark Davis has these like

87:24

great ideas visually about what this

87:26

ride will be like and Mark Davis

87:31

drew these comical, expressive pictures

87:31

of pirates and like that really sets the

87:33

mood for Pirates of the Caribbean. Like

87:34

who cares about that? Like you built

87:36

this boat system and these animatronics

87:38

that are running off of platters with

87:40

grooves in this like giant room, and

87:42

it's water in a warehouse, and it can

87:45

go time and time again.

87:46

>> Yeah. Like that's just fascinating. Like

87:48

that's great. Like at the beginning someone

87:50

was like hey the ride should do this and

87:53

they drew some pictures but I think

87:55

there should be more more focus on the

87:58

technology. So, I want to write a series

87:59

of books going deep in the Disney

88:02

archives, just getting into the

88:03

technology of various rides.

88:05

>> You'd be like the Robert Caro of Disney

88:07

rides.

88:08

>> Yeah. Hold on. I just heard um a

88:11

loud crash like Yep. That was the sound

88:14

of my agent jumping out of a very

88:16

tall building when she heard that, in

88:20

addition to my biography, I want to

88:21

write very technical books about Disney

88:23

rides. Uh someone else should. Uh all

88:25

right. And then also I'm looking forward

88:27

to I haven't read it yet. I just got the

88:28

issue, but there's a massive new

88:30

>> I saw that

88:30

>> Sam Altman article in the New Yorker.

88:32

It's by uh Ronan Farrow and Andrew Marantz.

88:35

Um

88:37

I'm interested. I don't think he comes

88:39

out looking... He's a weird guy. People I

88:41

know who kind of run in his circles have

88:43

been like he's a weird guy.

88:44

>> I mean, would you expect anything else?

88:46

Uh, I mean, some of the CEOs are just

88:49

more... like, if I

88:52

think of, like, a Bill Gates or a Steve

88:54

Jobs or even like a Jensen Huang,

88:56

they can be acerbic. They're

88:59

like a little like maybe like a little

89:02

neurodivergent in that like they

89:04

focus in on things and don't think about

89:06

like human emotions, but they're like

89:08

just driven like really good at

89:10

business, maybe like a little bit

89:12

misanthropic or whatever. Sam Altman, I

89:14

think, is just like a straight-up weird

89:15

guy.

89:16

So weird technology from a weird guy,

89:18

but I'm glad that he uh controls

89:20

the world. So he responded to it. By the

89:21

way, did you see what they did the

89:22

damage control?

89:24

>> They put out a big white paper like the

89:26

same day that article came out, and it's

89:28

just more of this nonsense from these

89:30

guys where it's like

89:32

>> this vision of, you know, we have to

89:35

completely rebuild our economy to uh be

89:38

prepared for all that's coming from

89:40

super intelligence. So, we need to start

89:41

thinking through now like whole new

89:42

economic systems that are going to make

89:44

sense in a world where AI does all the

89:45

work. It's like they always fall back on

89:47

just fairy tales when they

89:48

>> That's definitely going to have to be an

89:49

AI reality check episode.

89:51

>> I definitely want to go through it.

89:52

They always fall back on fairy tales

89:53

when they feel under threat because

89:55

they're never more comfortable than

89:56

when they are like the reluctant

89:58

stewards of a terrible future. And if

90:00

this was true, if it's like we're going

90:01

to have to rewire a whole economy

90:03

because there's no more work. I mean,

90:04

the right response would be like, "Oh,

90:06

no, no, shut down your... How about we

90:08

don't want that?" So, no, you can't have

90:10

another $60 billion. Like, this is

90:12

stupid. Um, but no one believes that.

90:14

All right, that's enough for today.

90:16

We'll be back next week with another advice

90:17

episode. And, uh, I believe I have an AI

90:20

reality check in the chamber for

90:22

Thursday as well, so check that out.

90:24

Until next time, as always, stay deep.

90:27

Hey, if you like today's discussion

90:29

about slow technologies and want a

90:31

closer look at how some of these current

90:35

fast technologies help us go awry, check

90:38

out episode 397, which is about why

90:41

productivity technologies don't make

90:43

your work easier. Check it out. So, if

90:47

you're looking to get more benefits out

90:49

of new AI tools or you just want to

90:51

repair your broken relationship with

90:53

older technology that continues to drive

90:56

you crazy, then this episode is for you.

Interactive Summary

The video discusses the concept of "slow technology" and its benefits, contrasting it with the modern trend of prioritizing speed and efficiency in digital tools. The host interviews author Amy Timberlake, who has embraced using a mechanical typewriter for her writing process. Timberlake explains how this slower, more deliberate approach has improved her work and well-being. The discussion then broadens to other examples of slow technology, such as MP3 players, analog productivity tools, and physical media like Blu-rays, highlighting how these tools can foster more intentional and meaningful experiences. The video concludes by drawing general principles about slow technology, emphasizing that speed isn't always the most important factor, focused cognitive context can lead to better results, friction can be beneficial, and a long-term perspective is crucial for evaluating productivity.
