How AI will change software engineering – with Martin Fowler

Transcript

0:00

What similar changes have you seen that

0:03

could compare to some extent to AI in

0:05

the technology field?

0:06

>> It's the biggest I think in my career. I

0:08

think if we looked back at the history

0:10

of software development as a whole, the

0:12

comparable thing would be the shift from

0:14

assembly language to the very first

0:15

high-level languages. The biggest part of it

0:17

is the shift from determinism to

0:19

non-determinism and suddenly you're

0:20

working with an environment

0:22

that's non-deterministic which

0:24

completely changes.

0:25

>> What is your understanding and take on

0:27

vibe coding? I think it's good for

0:29

explorations. It's good for throwaways,

0:31

disposable stuff, but you don't want to

0:33

be using it for anything that's going to

0:35

have any long-term capability. When

0:36

you're using vibe coding, you're

0:38

actually removing a very important part

0:40

of something, which is the learning

0:42

loop. What are some either new workflows

0:44

or new software engineering approaches

0:45

that you've kind of observed? One area

0:47

that's really interesting is Martin

0:49

Fowler is a highly influential author

0:51

and software engineer [music] in domains

0:52

like agile, software architecture, and

0:54

refactoring. He is one of the authors of

0:56

the Agile Manifesto in 2001, the author

0:58

of the popular book Refactoring, [music]

1:00

and regularly publishes articles on

1:02

software engineering on his blog. In

1:03

today's episode, we discuss how AI is

1:06

changing software engineering, and

1:07

[music] some interesting and new

1:08

software engineering approaches LLMs

1:10

enable, why refactoring as a practice

1:12

[music] will probably get more relevant

1:13

with AI coding tools, why design

1:15

patterns seem to have gone out of style

1:16

the last decade, what the impact of AI

1:18

is on agile practices, and [music] many

1:21

more. This podcast episode is presented

1:22

by Statsig, the unified platform for

1:24

flags, [music] analytics, experiments,

1:26

and more. Check out the show notes to

1:28

learn more about them and our other

1:29

season sponsor. If you enjoy the show,

1:31

please subscribe to the podcast on any

1:33

podcast platform and on YouTube. So,

1:35

Martin, welcome to the podcast.

1:37

>> Well, thank you very much for having me.

1:38

I didn't expect to be actually doing it

1:40

face to face with you. That was rather

1:42

nice.

1:42

>> It's all the better this way. Uh I

1:45

wanted to start with learning a little

1:46

bit on how you got into software

1:48

development which was what 40ish years

1:51

ago.

1:52

>> Yeah. It was yeah it would have been uh

1:56

late 70s early 80s. Yeah. I mean like so

2:00

many things it was kind of accidental

2:02

really. Um at school I was clearly no

2:06

good at writing because I got lousy

2:08

marks for anything to do with writing.

2:11

>> Really?

2:11

>> Yeah. Oh absolutely. Um, but I was quite

2:14

good at mathematics and that kind of

2:16

thing and physics. So, I kind of leaned

2:18

towards engineering stuff and I was

2:22

interested in electronics and things cuz

2:23

the other thing is I'm hopeless with my

2:25

hands. I can't do anything that requires

2:28

strength or physical coordination.

2:30

So, all sorts of areas of engineering

2:32

and building things, you know, I've

2:34

tried looking after my car and, you

2:36

know, I couldn't get the the rusted nuts

2:38

off or anything. You know, it was

2:39

hopeless. So, but electronics is okay

2:42

because that's all very, you know, it's

2:45

more than in the brain than, you know,

2:46

you need to be able to handle a

2:47

soldering iron, but that was about as

2:49

much as I needed to do. And then

2:51

computers, a step easier still. I don't even

2:52

need a soldering iron. So, I kind of

2:56

drifted into computers in that kind of

2:58

way. And uh that was my route into into

3:02

software development. Before I went to

3:04

university, I had a year working

3:06

with the UK Atomic Energy Authority.

3:09

Wow. Or 'ukulele' as we call it. Um and

3:13

I did some programming in Fortran IV and

3:17

um it seemed like a good thing to be

3:19

able to do. And then when I finished my

3:22

degree, which was a mix of electronic

3:24

engineering, computer science, I looked

3:25

around and I thought, well, I could go

3:26

into traditional engineering jobs, which

3:30

weren't terribly well paid and weren't

3:31

terribly high status, or I could go into

3:33

computing where it looked like there was

3:34

a lot more opportunity. And so I just

3:36

drifted into computing. And and this was

3:38

before the internet took off. This was

3:41

>> what what what kind of jobs were there

3:42

back then that that you could get into?

3:44

What was and what was your first job?

3:46

>> Well, my first job was with a consulting

3:47

company Coopers and Lybrand or, as I

3:49

refer to them, Cheat'em and Lie'tum,

3:52

and um we were doing advice on

3:57

information strategy in the particular

3:58

group I was with although that wasn't my

4:01

job. My job was I was one of the few

4:03

people who knew Unix because I'd done

4:05

Unix at college and so I looked after a

4:07

bunch of workstations that they needed

4:09

to do to run this weird software that

4:11

they were running to help them do their

4:13

strategy work and then I got interested

4:15

in the what they were doing with their

4:17

strategy work and kind of drifted into

4:18

that. I look at it back now and think,

4:21

god, that was a lot of snake oil

4:22

involved. But hey, it was my route into

4:24

the into the industry and it got me

4:26

early into the world of object-oriented

4:29

thinking and that was extremely useful

4:32

to get into objects in the mid-'80s

4:36

>> and and how how did you get into like

4:38

object-oriented was back then back we're

4:42

talking probably the mid-'80s that was a

4:44

very kind of radical thing

4:46

>> and you said you were working at a

4:47

consulting company which didn't seem

4:49

like the most cutting edge. So how does

4:50

the two and two come together? How did you

4:52

get to do cutting edge stuff?

4:53

>> Because this little group was into

4:55

cutting edge stuff and they had run into

4:57

this guy who had some interesting ideas,

5:00

some some very good ideas as well as

5:02

some slightly crazy ideas. And he

5:04

packaged it up with the term object

5:06

orientation, which wasn't really the

5:08

case, but it was it kind of, you know,

5:10

it's part of a snake oil as it were. I

5:12

mean, that's a little bit cruel to call

5:13

it snake oil because he had some very

5:15

good ideas as well. Um but that kind of

5:18

led me into that direction and and of

5:20

course in time I've found out more about

5:22

what object orientation was really about

5:24

and uh that events led to my whole

5:28

career

5:28

>> in in the next 10 or 15 years. How did

5:30

you make your way and eventually end up

5:32

at Thoughtworks and also you

5:34

started to write some some books, you

5:35

started to publish on the side. How did

5:37

you go go from like someone who was

5:39

brand new to the industry and kind of

5:40

wideeyed and just taking it all in,

5:42

learning things to starting to slowly

5:45

become someone who was teaching others?

5:47

>> Well, here again bundles of accidents,

5:50

right? So, while I was at that

5:52

consulting company, I met another guy

5:54

that they had brought in to help them

5:56

work with this kind of area, an American

5:58

guy um who became the really the biggest

6:01

mentor and influence upon my early

6:03

career. His name is Jim Odell and he had

6:06

been an early um adopter of information

6:09

engineering and had worked with in that

6:12

area and he was he saw the good parts of

6:16

uh these ideas that these these folks

6:18

were doing and he was an independent

6:21

consultant and a teacher and so he spent

6:24

a lot of his time doing work along those

6:26

lines. I left Coopers and Lybrand after

6:28

about a couple of years to actually join

6:30

this the crazy company which is called

6:32

PEK. Um and um I was with them for a

6:36

couple of years. It was a small company.

6:38

There was a grand total of four of us in

6:40

the UK office and that was the largest

6:42

office in the company.

6:43

>> Wow. [laughter]

6:43

>> Kind of thing. Um and um so I did I saw

6:47

a bit of you know having seen a big

6:49

company's um craziness. I then saw a

6:52

small company's craziness. did that for

6:54

a couple of years and then I was in a

6:57

position to go independent and I did um

6:59

helped greatly by Jim Odell who was um

7:02

who fed me a lot of work basically um

7:05

and also by some other work I got in the

7:08

U in the UK and that was great. I I

7:11

remember leaving PEK and thinking that's

7:13

it independence life for me. I'm never

7:15

going to work for a company again.

7:18

>> Famous last words.

7:19

>> Exactly. And um I carried on. I did well

7:23

as an independent consultant um

7:25

throughout the '90s and during that time

7:28

I wrote my first books. I moved to the

7:32

United States in 93

7:35

um and I was doing very very happily and

7:39

obviously got the rise of the internet,

7:41

lots of stuff going on in the late 90s.

7:43

It was a it was a good time and I ran

7:45

into this company called Thoughtworks and

7:47

they were just a client. I would just go

7:49

there and help them out. Yeah. The story

7:51

gets more involved. I had met Kent Beck and

7:54

worked with Kent at Chrysler, the famous

7:57

C3 project, which is kind of the birth

7:58

project of extreme programming. So I'd

8:00

worked on that,

8:01

>> seen extreme programming, seen the agile

8:03

thing. So I'd got the object orientation

8:05

stuff, I got the agile stuff, and then I

8:07

came to Thoughtworks and uh they were

8:10

tackling a big project, a big project

8:12

for them at the time. Still sizable,

8:14

about 100 people working on the project.

8:15

So, it's a sizable piece of work and it

8:19

it was clearly going to crash and burn.

8:22

Um, but I was able to help them um both

8:25

see what was going on and how to avoid

8:29

crashing and burning and they figured

8:31

out how to sort of recover from the from

8:34

the problem. Um, but then invited me to

8:37

join them and I thought, hey, you know,

8:38

join a company again maybe for a couple

8:40

of years. They're really nice people.

8:41

They're my favorite client. You know, I

8:44

I always thought of it as other clients

8:45

would say, "These are really good ideas,

8:47

but they're really hard to implement."

8:49

And while Thoughtworks would say, "These

8:51

are really good ideas. They're really

8:53

hard to implement, but we'll give it a

8:54

try." And they usually pulled it off.

8:57

And so I thought, "Hey, with a client

8:58

like that, I might as well join them for

9:00

a little while and and see what we can

9:01

do." That was 25 years ago.

9:04

>> Yeah. And then fast forward today, your

9:06

title has been for I think over a

9:08

decade, chief scientist.

9:09

>> Since I joined, that was my title when I

9:11

joined.

9:12

>> Since you joined. So I have to ask what

9:13

does a chief scientist at Thoughtworks

9:15

do?

9:16

>> Well, it's important to remember I'm

9:17

chief of nobody and I don't do any

9:19

science. [laughter]

9:21

The title was given because that title

9:23

was used a fair bit around that time for

9:27

some kind of public facing ideas kind of

9:31

person. If I remember correctly, Grady

9:32

Booch was chief scientist at Rational um

9:35

at the time

9:36

>> actually. True.

9:37

>> And um and there were other people who

9:39

had that title. So it was a it was a

9:41

highfalutin, very pretentious title, but

9:44

they felt it was necessary. It was weird

9:46

because one of the things of

9:47

Thoughtworks at that time was you could

9:49

choose your own job title. Anybody could

9:51

choose whatever job title they like. But

9:53

I didn't get to choose mine. I had to

9:55

take the chief scientist one. They

9:57

didn't like titles like flagpole or

9:59

battering ram or um [laughter]

10:02

or loudmouth which is the one I most

10:05

prefer. And one thing that Thoughtworks

10:08

does every six months and the latest one

10:10

just came out is the Thoughtworks Radar

10:12

>> and this latest radar, it just came out

10:15

I think a few days ago. It's the

10:16

>> Today it was launched I think

10:17

>> actually it was today. So by the time

10:19

this is in production it will have been

10:20

a few weeks but

10:22

>> uh it's actually really really fresh. So

10:24

I just looked at it and things that it

10:26

it lists. But I'll just list a few

10:28

things that I saw there and the adopting

10:29

which is the the ones that they

10:31

recommend using: pre-commit hooks, ClickHouse

10:33

for database analytics, vLLM, which is

10:36

for running LLMs on cloud or

10:38

on-prem in a really efficient way, for

10:40

trialing: Claude Code, FastMCP, which is a

10:42

framework for MCP servers. And

10:45

they're also recommending a lot of

10:47

different things related for example to

10:49

AI and LLMs to assess. Uh, can you share a

10:52

little bit of how Thoughtworks comes up

10:54

with this technology radar what's the

10:56

process? And it feels very kind

10:59

of on the pulse every time like it feels

11:01

close to the pulse of the industry and

11:03

again I I talk with a lot of other

11:04

people. How do people at Thoughtworks

11:08

stay this close to what is happening in

11:10

the industry?

11:11

>> Okay. Yeah. Well, this will be a bit of

11:13

a story. Okay. So, it started just over

11:15

10 years or so ago. Its origin was one

11:19

of the things that we've really pushed

11:20

at Thoughtworks is to have technical

11:22

people, practitioners

11:24

really involved at various levels

11:27

of running the business and one of the

11:30

leaders of that um was our former CTO

11:33

Rebecca Parsons. So Rebecca became CTO

11:37

and she said I want an advisory board

11:40

who will keep me connected with what's

11:42

going on in projects. So she created

11:45

this technology advisory board and it

11:48

had a bunch of people whose job was to

11:50

brief her as to what was going on. We'd

11:52

meet you know two or three times a year.

11:54

She had me on the advisory board not so

11:56

much for that reason but because I was

11:57

very much sort of a public face of a

11:59

company. She wanted me present and

12:01

involved in that. And originally that

12:03

was just our brief. We would just get

12:04

together and we'd talk through this

12:05

stuff. And then one of these meetings um

12:09

Daryl Smith who was actually her TA at

12:12

the time technical assistant

12:14

um, he said, we've got all these

12:18

projects going on it would be good to

12:19

get some picture of what kinds of

12:21

technologies we're using and how useful

12:23

they are and so as to better exchange

12:25

ideas because, like so many companies,

12:27

we struggle to percolate good ideas

12:29

around enough I mean even then when

12:31

we were only a few thousand it was a

12:33

struggle, and we're 10,000 now so yeah

12:35

it's So we thought okay this is a nice

12:38

idea and he came up with this idea of

12:39

the radar metaphor and the rings of the

12:41

radar that we see today and we had

12:43

a little meeting and we created the radar

12:45

and it's a habit: if we do something

12:47

for internal purposes we try and just

12:49

make it public

12:50

>> and that's always been a strong part of

12:51

the Thoughtworks ethos; it's part of why I'm

12:53

there of course is you know we just we

12:56

talk about everything that we do and we

12:57

share everything, we give away our

12:59

secret sauce all the time so we did

13:01

that and people were very interested and

13:03

so we continued doing it now the process

13:05

has changed a bit over time. At

13:07

that original meeting, many of the

13:09

people that were in the room were

13:11

actually hands-on on projects, advising

13:13

clients all the time. Now, as we've

13:15

grown an order of magnitude, um it's

13:18

much harder to do that. And we've also

13:20

created more of a process where people

13:22

can submit blips, nominate them. A blip

13:26

being a point on the radar, an entry

13:29

>> and um they will go to somebody they're

13:36

connected to, either geographically

13:38

or through the line of business or

13:38

technology or whatever and say, "Hey, we

13:41

think this technology is interesting."

13:43

They'll brief us a little bit about it.

13:44

And then they brief the members of the

13:47

what's now called the Doppler group

13:48

because we make a radar. Yeah. I mean,

13:51

we can be a bit loose with our metaphors

13:53

at times. Um and they and then at the

13:56

meeting we'll decide which of these

13:57

blips to put on the radar and not and

13:59

obviously you get some crosspollination

14:01

because somebody will say oh yeah I

14:02

talked to somebody about this as well

14:03

and so it's very much this bottom-up

14:06

exercise

14:08

and that's how it's created now. So we

14:11

will do these blip

14:12

gathering sessions about a month or two

14:14

before the radar meeting and gradually

14:17

shake them up and then in the meeting

14:18

itself we go through them one by one and

14:21

for me it's a bit weird because I'm so

14:23

detached from the day-to-day these days

14:25

that it's just this lineup of

14:27

technologies and things I have no idea

14:29

what most of them are but interesting to

14:31

hear about and sometimes I latch on to

14:33

certain themes or something like that.

14:35

Um and that was an important part of

14:37

microservices about 10 years ago because

14:39

that came up in through that radar

14:41

process and uh we got together with

14:43

James Lewis and we ended up writing a

14:45

good bit further about that. Um but

14:47

that's really what happens is we go

14:49

through this process of spotting this

14:52

stuff.

14:53

>> Yeah. And and the the radar analogy I

14:56

know some companies also take the idea

14:57

which by the way Thoughtworks encourages

14:59

saying make your own radar take it in

15:01

your own company. You can I think they

15:03

even like have tools around it. I really

15:04

like how Thoughtworks never said like

15:06

this is the thing for the industry. They

15:07

said this is the thing for us. This is

15:08

what we see. This is what we recommend

15:12

our team members, or maybe

15:14

our clients to consider. Or there's

15:17

also I like that there's a hold: maybe

15:19

just beware. We're not seeing

15:21

great results with this and and here's

15:23

the reasons for it. And yeah, I guess

15:25

the reason it feels fresh is uh probably

15:27

a lot of the work that Thoughtworks does

15:29

feels cutting edge, because about half

15:31

of it, or a third of

15:33

it, is around the hottest

15:35

topic right now: AI, LLMs, and all the

15:38

techniques that people are trying to see

15:40

if they work or are the things that we

15:42

are seeing actually start to work.

15:43

Yeah, I mean Thoughtworks has

15:45

basically got several thousand

15:46

technologists all over the world doing

15:48

projects of various kinds for all sorts

15:50

of different organizations

15:52

and the radar is a mechanism that we've

15:54

discovered is a way of getting some of

15:56

that information out of their heads and

15:58

spreading it around both internally and

16:00

to the industry as a whole. And you're

16:01

right, I it is a recommended thing for

16:03

clients to do is to try and do their own

16:05

radars. It's slightly different when

16:07

it's a client radar thing because

16:09

sometimes it can be more of a this

16:12

is what we think you should be doing

16:14

with a bit more of a forcefulness to it

16:16

than than we would give and also they

16:19

can be a bit more choosy in the sense of

16:22

they can say yeah we're just not

16:23

interested in doing certain technologies

16:24

while for us it's a case if our clients

16:26

are doing it then we're going to find

16:28

out about it right we have to use it

16:30

>> of course the the radar is full with a

16:32

lot of AI- and LLM-related things because

16:34

this is a huge change. In my

16:37

professional career, it feels by far

16:39

the biggest technology innovation

16:42

change coming in. Looking back on your

16:45

career, what similar changes have you

16:48

seen that could compare to some extent

16:50

to AI in the technology field?

16:53

>> It's the biggest, I think, of my career.

16:55

I think if we looked back at the history

16:57

of software development as a whole, the

16:59

comparable thing would be the shift from

17:01

assembly language to the very first

17:03

high-level languages, which is before my time

17:05

right, when we first started coming

17:07

up with COBOL and Fortran and the like, I

17:09

would imagine that would be a similar

17:11

level of shift.

17:11

>> So you started to work with Fortran and

17:13

you probably knew people who were still

17:15

doing assembly or at least knew

17:17

some people from that generation.

17:18

>> There was a bit of assembly around when

17:20

I was working still. From what you

17:22

picked up around that time, uh, what was

17:25

that shift like in terms of mindset or

17:28

or you know like because it it was a big

17:30

change right you really needed to know

17:32

the internals of the hardware and the

17:33

instructions and the the different

17:36

>> uh I I did very little assembly at

17:38

university but it's been very useful

17:39

because I never want to do it again

17:41

[laughter]

17:42

>> very wise, but what did you pick up

17:44

in terms of what needed to change and

17:46

and how it changed the industry just

17:48

moving from mostly assembly to mostly

17:50

higher level languages

17:51

>> well I mean for a start as you said

17:52

things were very specific to individual

17:54

chips. The instructions were

17:55

different on every chip. As well, you know,

17:59

things like registers, where you

17:59

access memory. You had these very

18:02

convoluted ways of doing even the

18:04

simplest thing because your only

18:06

instruction was for something like move

18:08

this value from a memory location to

18:09

this register. Um and so you've

18:11

always got to be thinking in these very

18:13

very low-level forms and even the very

18:17

relatively poor um high-level language

18:19

like Fortran, at least I can write things

18:21

like conditional statements and loops.

18:23

There's no else in my conditional statements in

18:25

Fortran IV, but I can at least go if, and I

18:28

can get one statement. I can't do a block

18:30

of statements, I have to use go-tos, but

18:32

you know it's better than what you can

18:34

do in assembly right and so there's a

18:36

definite shift of moving away from the

18:38

hardware to thinking in terms of

18:39

something a bit more abstract and I

18:42

think that is a very very big shift and

18:45

then of course once I'm using forran I

18:47

can be insulated to some degree away

18:49

from the hardware I'm running on.

18:51

Now, am I running this on a

18:54

mainframe? Am I running this on a mini

18:56

computer? I mean there are issues

18:58

because the language always varied a

19:00

little bit from place to place but

19:02

you've got a degree of decoupling there.

19:04

Um that was really quite significant I

19:07

think. I mean I only did it on small uh

19:10

microprocessor-like units because again

19:12

it was the electronic engineering part

19:13

right so we were fairly close to the

19:15

metal anyway for some of that um but um

19:19

you you definitely had that mind shift

19:22

and I think with LLMs it's a

19:25

similar degree of mind shift although as

19:28

I've, you know, written about it, the

19:30

interesting thing is the shift is not so

19:32

much of an increase of a level of

19:34

abstraction although there is a bit of

19:36

that the biggest part of it is the shift

19:38

from determinism to non-determinism and

19:40

suddenly you're working with an

19:42

environment that's non-deterministic

19:44

which completely changes how you have to

19:45

think about it. Martin just talked about

19:47

how AI is the most disruptive change

19:49

since the move from assembly to high-level

19:51

languages. That transition wasn't just

19:53

about changing the language we used; it

19:55

required entirely new tool chains

19:58

Similarly, AI-accelerated development

20:00

isn't just about shipping faster it's

20:02

about measuring whether what you ship

20:04

actually delivers value. That's where

20:06

modern experimentation infrastructure

20:07

comes in, and our presenting sponsor

20:09

Statsig can help. With Statsig, instead of

20:12

stitching together point solutions, you

20:14

get feature flags, analytics, and

20:16

session replay all using the same user

20:18

assignments and event tracking. For

20:20

example, you ship a feature to 10% of

20:22

users. As you do, the other 90%

20:25

automatically become your control group

20:26

with the same event taxonomy. You can

20:29

immediately see conversion rate

20:30

differences between groups, drill down

20:32

to see where treatment users drop off in

20:34

your funnel. Then watch session

20:35

recordings of specific users who didn't

20:37

convert to understand what went wrong.
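The rollout just described, shipping to 10% of users while the other 90% automatically become the control group, is typically implemented with deterministic hash-based bucketing, so the same user always lands in the same group. A minimal Python sketch (a hypothetical illustration, not Statsig's actual algorithm):

```python
import hashlib

def bucket(user_id: str, feature: str, rollout_pct: float) -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing user_id together with the feature name gives a stable,
    roughly uniform value in [0, 1); the same user always gets the
    same group, which is what makes the experiment valid.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "treatment" if fraction < rollout_pct else "control"

# A 10% rollout: roughly 1 in 10 users see the feature.
groups = [bucket(f"user-{i}", "new-checkout", 0.10) for i in range(10_000)]
share = groups.count("treatment") / len(groups)
assert 0.07 < share < 0.13  # close to 10%

# Assignment is stable across calls, so flags, analytics, and session
# replay can all share the same user-to-group mapping.
assert bucket("user-42", "new-checkout", 0.10) == bucket("user-42", "new-checkout", 0.10)
```

Because the assignment is a pure function of the user and feature name, any service that knows the two strings can recompute the group without a shared database.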

20:39

The alternative is running jobs between

20:41

different services to sync user segments

20:43

between your feature flag service and

20:45

your analytics warehouse and then

20:47

manually linking up data that might have

20:49

different user identification logic.

20:51

It's a lot of work and it can also go

20:53

wrong. Statsig has a generous free tier

20:55

to get started, and Pro pricing for

20:57

teams starts at $150 per month. To learn

21:00

more and get a 30-day enterprise trial,

21:02

go to statsig.com/pragmatic.

21:04

And now, let's get back to the shift in

21:06

abstraction with LLMs. Can we talk about

21:09

that shift in abstraction? Because one

21:12

very naive way of looking at it is

21:15

saying like, well, we've had

21:17

three levels, right? We have assembly

21:18

where you have commands for the

21:20

hardware. You need to be intimately

21:22

aware of the hardware. We have high

21:23

level programming languages starting

21:25

with C later Java later JavaScript and

21:28

uh where you don't need to be aware of

21:30

the hardware you're aware of the logic

21:32

and what you might say is we have a

21:36

new abstraction as well: the English

21:36

language which will you know generate

21:38

this code. You're saying you don't think

21:40

it's an abstraction jump. Why do you

21:42

think that is?

21:43

>> I think there's a bit of an abstraction

21:45

jump I think the abstraction jump

21:46

difference is smaller than the

21:47

determinism nondeterminism jump and it's

21:50

it's worth remembering one of the key

21:51

things about high-level languages

21:53

which I didn't mention as I was talking

21:54

about earlier on is the ability to

21:56

create your own abstractions in that

21:58

language that is particularly important

22:00

as you get to things like object

22:01

orientation towards more expressive

22:04

functional languages like Lisp, which

22:06

Fortran and COBOL didn't really have so much of. I mean,

22:08

in Fortran and COBOL you could do that to

22:10

some extent, because at least

22:12

with Fortran you can create subroutines and

22:13

build abstractions out of that but

22:15

you've got so many more tools for

22:17

building abstractions when you've got

22:19

the the abilities of more modern

22:20

languages and that ability to build

22:22

abstractions is crucial.
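The point about building abstractions can be pictured with a tiny sketch: define a small vocabulary of domain functions once, then solve the problem in that vocabulary rather than in raw primitives. The domain and the names below are invented purely for illustration:

```python
# Raw-primitive version: the intent (apply a discount, then tax) is
# buried in arithmetic that every call site would have to repeat.
def total_raw(price: float) -> float:
    return (price - price * 0.10) * 1.2

# Abstraction-building version: name the domain concepts once, then
# express the rule in the vocabulary you created.
def discounted(price: float, rate: float) -> float:
    return price - price * rate

def with_tax(price: float, rate: float) -> float:
    return price * (1 + rate)

def total(price: float) -> float:
    # Reads like the domain rule: "10% discount, then 20% tax."
    return with_tax(discounted(price, 0.10), rate=0.20)

assert abs(total(100.0) - total_raw(100.0)) < 1e-9  # same behaviour, clearer intent
```

The behaviour is identical; what changes is that the second version gives the next reader (or an LLM being prompted about the code) named concepts to talk in, which is the "create your own language" idea discussed next.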

22:24

>> So you can build a building block inside

22:26

of the language that sets you and of

22:28

course here we have how domain-driven

22:30

design later enables these things

22:32

and so on.

22:33

>> Exactly. I mean an old Lisp adage is

22:35

really what you want to do is to create

22:37

your own language in lisp and then solve

22:40

your problem using the language that

22:41

you've created. And I think that way of

22:43

thinking is a good way of thinking in

22:44

any programming language. you're both

22:46

solving the problem and creating a

22:49

language to to describe the kinds of

22:50

problems you're trying to solve in. And

22:52

if you can balance those two nicely,

22:55

that is what leads to very maintainable

22:57

and flexible code. So the building of

23:00

abstractions that's I think to me a key

23:03

element of high level languages and AI

23:06

helps us a little bit in that because we

23:08

can build abstractions a bit more easily

23:10

a bit more fluidly but we have this

23:12

problem and now we're talking about

23:13

non-deterministic implementations of

23:15

those abstractions which is an issue and

23:19

we've got to sort of learn a whole new

23:21

set of balancing tricks um to get around

23:24

that. My colleague Unmesh Joshi has

23:27

written a couple of things um

23:29

that I've been really enjoying about

23:31

his thinking, because he's

23:33

really pushing this using the LLM to

23:37

co-build an abstraction and then using

23:40

the abstraction to talk more effectively

23:42

to the LLM and that I'm finding really

23:45

really interesting way of thinking about

23:47

how he's working with that because he's

23:49

really pushing that direction. There's a

23:51

a thing I read in and I can't remember

23:54

the book off the top of my head. We'll

23:55

have to dig it out later that talked

23:58

about how apparently if you can describe

24:01

to an LLM a whole load of chess matches

24:04

and describe it just in plain English

24:06

and the LLM when you do that the LLM

24:09

can't really understand how to play

24:10

chess. But if you take those same chess

24:12

matches and describe those chess matches

24:14

to the LLM in chess notation, then it

24:17

can. And I thought that was really

24:19

interesting that, obviously,

24:22

you're shrinking down the token size,

24:24

but you're also using a

24:26

much more rigorous notation

24:29

to describe the problem. So maybe that's

24:31

an angle of how we use LLMs. What we have

24:33

to come up with is a rigorous way of speaking

24:36

and we can get more traction that way.
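A rough way to see the "rigorous notation" point: the same chess opening expressed in plain English versus standard algebraic notation. The strings below are invented examples, and whitespace-split word counts only stand in for real LLM tokens:

```python
# Two encodings of the same information. The claim in the discussion
# is that a rigorous notation is both far more compact for the model
# and unambiguous. Splitting on whitespace is a crude proxy for an
# actual tokenizer, but the compression is of the same flavour.
english = ("The knight that started on g1 moves to the square f3, "
           "and then black replies by pushing the queen's pawn two squares.")
notation = "1. Nf3 d5"  # the same two moves in algebraic notation

assert len(notation.split()) < len(english.split())
```

The notation also removes ambiguity: "the queen's pawn two squares" needs context to resolve, while `d5` identifies one exact move, which is the same benefit a ubiquitous language or DSL gives when prompting an LLM about a domain.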

24:39

And of course that has great parallels

24:41

with the ideas of domain-driven

24:43

design and ubiquitous language, and also

24:45

some of the stuff that I was working on

24:47

a decade or so ago around domain

24:48

specific languages and language

24:50

workbenches. So I there's some

24:52

fascinating stuff around there that'll be

24:54

interesting to see how that plays out.

24:55

>> Yeah. Yeah. And I guess is this the

24:57

first time we're seeing a tool that is

25:00

so wide in software engineering that is

25:01

nondeterministic because we did have

25:03

neural nets for example in the past they

25:05

were not but they were more I feel the

25:07

application of those was a lot more kind

25:10

of niche and not not everywhere now

25:11

every single developer is I mean if

25:14

you're using code generation you are

25:15

using non-deterministic things of course

25:17

we're integrating them left and right

25:19

trying out where it works. Is is it fair

25:21

to say that this is probably the first

25:22

time we're facing this challenge of

25:25

deterministic computers which we know

25:26

very well. We know their their limits

25:28

and all those things and of course

25:30

there's some race conditions and some

25:31

exotic things but now we have

25:34

>> exactly problem to solve for

25:36

>> it's a whole new way of thinking. It's

25:38

got some interesting parallels to other

25:40

forms of engineering. Other forms of

25:42

engineering you think in terms of

25:43

tolerances and my wife's a structural

25:45

engineer right? She always thinks in

25:47

terms of what are the tolerances? How

25:48

much how much extra stuff do I have to

25:51

do beyond what the math tells me because

25:53

I need it for tolerances because yeah, I

25:55

mean I mostly know what the properties

25:56

of wood or concrete or steel are, but

25:58

I've got to, you know, go for the worst

26:00

case. We need probably some of that kind

26:03

of thinking ourselves. What are the

26:04

tolerances of the non-determinism that

26:07

we have to deal with and realizing that

26:09

we can't skate too close to the edge

26:10

because otherwise we're going to have

26:12

some bridges collapsing. I I suspect

26:14

we're going to do that particularly on

26:15

the security side. We're going to have

26:16

some noticeable crashes. I I fear um

26:19

because people have skated way too

26:21

close to the edge in terms of the

26:22

non-determinism of the tools they're

26:24

using.

26:24

>> Oh, for sure. But before we go into

26:27

where we could crash, what are some

26:29

either new workflows or new software

26:31

engineering approaches that you've kind

26:33

of observed or or aware of that that

26:36

sound kind of exciting that we we can

26:38

now do with LLMs or at least we can

26:39

try to give them a goal that would have

26:41

been impossible with, you know, our old

26:43

deterministic toolkit,

26:44

>> right? One area is one one that has got

26:47

lots of attention already is the being

26:49

able to knock up a prototype in a matter

26:51

of days. That's just way more than you

26:53

could have done previously. So, this is

26:55

the vibe coding thing. Um, but it's it's

26:59

more than just that because it's also an

27:01

ability to try explorations. Um, people

27:04

can go, hey, I not really quite sure

27:06

what to do with this, but I can spend a

27:07

couple of days exploring the idea much

27:10

much more rapidly than I could have

27:12

before. And so for throwaway

27:14

explorations for disposable little tools

27:17

and things of that kind um and including

27:20

stuff by people who don't think

27:22

of themselves as software developers. I

27:23

think there's a whole area and you know

27:26

we can with good reason be very

27:28

suspicious of taking that too far

27:30

because there's a danger there. But we

27:33

also realize that as long as you treat

27:35

that within its right bounds, that's a

27:38

very valuable area and I think we'll

27:40

that's that's really good. At the

27:41

completely opposite end of the scale, one

27:44

area that's really interesting is

27:45

helping to understand existing legacy

27:47

systems. So my colleagues have have put

27:50

a good bit of work in this um year or

27:52

two ago. And basically the idea is you

27:57

take the code itself um do the the

28:03

essentially the the semantic analysis on

28:05

on it. populate a graph database

28:07

essentially with that kind of

28:09

information and then use that graph

28:11

database in a kind of RAG-like style

28:14

and you can begin to interrogate and say

28:15

well what happens to this piece of data

28:17

which bits of code touch this data as it

28:19

flows through the program incredibly

28:21

effective and in fact if I remember

28:23

correctly we put actually understanding

28:25

of legacy systems into the adopt ring

28:27

because we said yeah you if you're if

28:29

you're doing any work with legacy

28:30

systems you should be using LLMs in some

28:32

way to help you understand
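A minimal sketch of the idea (not Thoughtworks' actual tooling): statically index which functions touch which names, the kind of structure you would load into a graph database and then interrogate RAG-style with questions like "which bits of code touch this data?"

```python
# Toy index of a codebase: for each name, which functions mention it?
# The SOURCE snippet is a made-up stand-in for a legacy codebase.
import ast
from collections import defaultdict

SOURCE = """
def load_order(order_id):
    record = db_fetch(order_id)
    return record

def apply_discount(record):
    record["total"] -= record["discount"]
    return record
"""

def index_usages(source: str) -> dict:
    """Map each name to the set of functions whose bodies mention it."""
    tree = ast.parse(source)
    usages = defaultdict(set)
    for func in ast.walk(tree):
        if isinstance(func, ast.FunctionDef):
            for node in ast.walk(func):
                if isinstance(node, ast.Name):
                    usages[node.id].add(func.name)
    return usages

usages = index_usages(SOURCE)
# "Which bits of code touch this piece of data as it flows through?"
print(sorted(usages["record"]))
```

In a real setup the index would be built from full semantic analysis and stored in a graph database, with the LLM translating questions into queries against it; this only shows the shape of the data.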

28:34

>> so so in this ring in the Thoughtworks

28:35

Radar the fewest things are in the

28:38

adopt ring. Adopt says we strongly suggest

28:40

that you look at this at least you know

28:42

thought works themselves look at it

28:43

there's only four items and one of them

28:46

is yes uh to use GenAI to

28:48

understand legacy code which to me tells

28:51

that you have seen great success which

28:52

is it's refreshing to hear by the way I

28:55

did not hear this as much and I guess it

28:57

helps at Thoughtworks I'm sure you have

29:00

to work with a lot of

29:01

>> well I mean it came from the fact that

29:02

some of the folks who had done some

29:03

really interesting work on on legacy

29:05

code stuff um happened to bump into and

29:08

look at this and say, "Hey, let's try

29:10

this out." And they found it to be very

29:11

effective and it also has been an

29:13

ongoing interest for many of us at

29:15

Thoughtworks because we have to do it

29:17

all the time. And how do you effectively

29:20

work with the the modernization of

29:23

legacy systems because every big company

29:25

that you know is older than a few years

29:27

has got this problem. Y

29:29

>> and they have it in spades

29:31

>> and then especially just simple things

29:33

people leave right as as as simple as

29:35

that and having uh Gen AI that can help

29:39

you make some progress is it's already

29:40

better than making no progress.

29:42

>> Exactly. So those are two areas that are

29:45

clearly um right away I would say those

29:48

are there's great success for using LLMs

29:51

and then there's the areas that we're

29:53

still figuring out. I mean, I'm

29:55

certainly seeing some interest more in

29:57

more and more interesting stuff as

29:58

people try to figure out how to work

30:00

with an LLM on a one-to-one basis to

30:02

build decent quality software. We're

30:05

seeing some definite signs of how you

30:07

you got to work with very thin, rapid

30:11

slices, small slices. You've got to

30:13

treat every slice as a PR from a rather

30:16

dodgy collaborator who's very productive

30:18

in the lines of code sense of

30:20

productivity. Um, but you know, you

30:22

can't trust a thing that they're they're

30:24

doing. So, you've got to review

30:25

everything very carefully when you play

30:27

with the genie like that. The genie is

30:29

GK Kent's term for it. Or or Dusty the

30:32

uh sort of the anthropomorphic donkey,

30:34

which is how Birgitta

30:36

I love her take.

30:38

>> Yeah. But using it well, you can

30:40

actually definitely get some speed up in

30:43

your process. It's not the kind of speed

30:45

up that the the the advocates are

30:47

talking about, but it is non-trivial.

30:50

It's certainly worth learning how to to

30:52

make some use of this and it's folks

30:54

like Birgitta or Kent or um Steve Yegge

30:57

those are the those are the folks I

30:59

think who are pushing this. We're still

31:01

I think learning how to do this.

31:03

>> Everyone is learning it. Absolutely.

31:05

>> And it's still the question and most of

31:07

the experience we're gaining is building

31:09

in a green field environment. So that

31:11

leaves big questions in terms of a the

31:13

brownfield environment. Well, we know

31:15

that that LLMs can help us understand

31:17

legacy code. Can they help us modify

31:20

legacy code in a safe way? [screaming]

31:24

It's still a question. I mean, I was

31:25

just chatting with with James Lewis

31:27

because he's in town as well this

31:28

morning and he was commenting about he

31:30

was playing with Cursor and he's been

31:32

was just building something like this

31:33

and he said, "Oh, I I wanted to change

31:35

the name of a class um in a not too big

31:38

program and he sets it off to do that."

31:41

and comes back an hour and a half later

31:43

and has used you know 10% of his monthly

31:45

allocation of tokens and all he's doing

31:47

is changing the name of a class

31:48

>> and and we actually in IDs we actually

31:51

have functionality which which I I still

31:54

remember when was cutting edge this was

31:55

probably 20 years ago when Visual Studio

31:58

it wasn't even Visual Studio, it was

31:59

JetBrains who came out with an extension

32:01

called ReSharper which helped refactor

32:04

code and people paid serious money this

32:06

was like $200 per year or something to

32:08

get this plugin and now you could right

32:10

click and say rename class and it went

32:12

and it built that the graph behind the

32:15

scenes somehow it went and changed you

32:17

could rename variables and again this

32:18

was a a huge deal in fact uh in Xcode

32:21

Apple's developer uh plat uh ID for a

32:25

while when swift came out you couldn't

32:26

do these refactors and it was you know

32:28

people were like so it's interesting how

32:30

some things are easy we've solved it and

32:33

LLMs are not very efficient at it, not

32:36

very good at it
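For contrast, the deterministic rename an IDE has done for twenty years is nearly free. A toy approximation with a word-boundary regex (real refactoring engines resolve scopes and types across the whole project; this is only an illustration of how cheap the deterministic version is):

```python
# Toy "rename class" refactor: deterministic, instant, no tokens spent.
# Real IDE refactoring builds a semantic graph; a word-boundary regex
# is only a rough approximation for a self-contained snippet.
import re

source = """
class OrderProcessor:
    pass

processor = OrderProcessor()
"""

def rename_class(src: str, old: str, new: str) -> str:
    # \b keeps us from touching names that merely contain `old`.
    return re.sub(rf"\b{re.escape(old)}\b", new, src)

renamed = rename_class(source, "OrderProcessor", "InvoiceProcessor")
print(renamed)
```

The point of the anecdote stands: asking an agent to do this burns time and tokens on a problem that deterministic tooling already solves exactly.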

32:36

>> y

32:36

>> yes and then I mean he did that just to

32:38

see what it was going to be like right

32:40

cuz he knows you can just I mean we've

32:42

had this for a long technology for a

32:43

long time so it's kind of amusing I mean

32:45

but it's also to the point that when

32:48

working with an existing system and

32:49

modifying an existing system we that's

32:52

still really up in the air and then

32:55

another area that's really up in the air

32:56

both green field and brownfield is what

32:59

happens when you've got a team of people

33:00

because most software has been built by

33:03

teams and will continue to be built with

33:04

teams because even if and I don't think

33:06

it will um AI makes us order of

33:09

magnitude more productive

33:10

We still need a team of 10 people to

33:12

build what a team of 100 people needed

33:14

to build and we will always want this

33:16

stuff. There's no sign of demand

33:19

dropping for software. So we will always

33:21

want teams and then the question is of

33:23

course how do we best operate with AI in

33:26

the team environment and we're still

33:28

trying to figure that one out as well.

33:30

So there's lots of questions we got some

33:32

an some answers some beginnings of

33:34

answers and it's just a fascinating time

33:36

to watch it all.

33:37

>> You mentioned vibe coding. what what is

33:39

your understanding and take on vibe

33:41

coding?

33:42

>> Well, when I use the term vibe coding, I

33:44

used I try to go back to the original

33:46

term which is basically you don't look

33:47

at the output code at all. Maybe, you

33:50

know, take a glance at it out of

33:51

curiosity, but you you really don't care

33:53

and maybe you don't look don't know what

33:55

you're doing because you don't you've

33:56

got no knowledge of programming. It's

33:58

just spitting out stuff for you. So,

34:00

that's my how I define vibe coding. Um,

34:04

and my my take on it is kind of as I've

34:06

indicated, I think it it's good for

34:08

explorations. It's good for throwaways,

34:10

disposable stuff. Um, but you don't want

34:13

to be using it for anything that's going

34:14

to have any long-term capability because

34:17

it's I mean, again, just this is a a

34:21

silly anecdote, but I was working um my

34:24

colleague Unmesh, who just wrote

34:26

something that we published yesterday.

34:29

And uh as part of doing this, we we

34:31

create this little pseudo graph of

34:33

capability over time kind of thing,

34:34

which is, you know, one of those silly

34:36

little pseudo graphs that helps

34:38

illustrate a point. And he asked the uh

34:41

at LLM to create this. He described the

34:43

curves he wanted and it came up with

34:45

and put it up there. And I and he he

34:47

committed it to our repo. And I was

34:49

looking at it and thinking, yeah, that's

34:51

a good good enough graph. I want to

34:52

tweak it a little bit. I want to, you

34:53

know, the labels are a bit far away from

34:55

the lines they're labeling, so I'd like

34:56

to bring them closer. So I open up the

34:58

SVG of what the LLM has produced and oh

35:04

I mean it was astonishing how

35:06

complicated and convoluted it was for

35:09

something that I had written the

35:10

previous one myself and I knew it was

35:13

you know a dozen lines of SVG and SVG is

35:16

not exactly a compact language right

35:18

because it's XML but this thing was

35:21

gobsmackingly um weird and I mean that's

35:23

the thing when you vibe code stuff it's

35:26

going to produce god knows what and

35:27

often it really is and you cannot then

35:30

tweak it a little bit.
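For comparison, a hand-written pseudo-graph really can stay small enough to tweak by hand. A hypothetical version (the labels and curve shapes are made up) held in a Python string so the line count is checkable:

```python
# A hand-written pseudo-graph like the one described: two labeled
# curves in well under a dozen lines of SVG, so moving a <text> label
# closer to its line is a one-character edit.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="300" height="200">
  <path d="M 10 180 Q 150 170 290 60" fill="none" stroke="black"/>
  <path d="M 10 180 Q 150 120 290 20" fill="none" stroke="gray"/>
  <text x="200" y="75">curve A</text>
  <text x="200" y="15">curve B</text>
</svg>"""
print(len(svg.splitlines()), "lines of SVG")
```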

35:32

>> You have to basically throw it away and

35:35

hope that you can generate whatever it

35:37

is you're trying to tweak. And the other

35:39

thing of course that's the difference

35:40

and this is the the heart of the article

35:42

that Unmesh wrote um that we published

35:45

yesterday is when you're using vibe

35:48

coding in this kind of way, you're

35:49

actually removing a very important part

35:52

of of something which is the learning

35:53

loop. If you're not looking at the

35:55

output, you're not learning. And the

35:57

thing is that so much of what we do is

36:00

we come up with ideas, we try them out

36:02

on the computer with this constant back

36:05

and forth between what the computer does

36:07

with what we're thinking. We're

36:08

constantly going through that learning

36:10

loop programming approach and Unmesh's point,

36:12

which I think is absolutely true, is you

36:14

cannot shortcut that process. And what

36:16

LLMs do is they just kind of skim over all

36:18

of that and you're not learning. And

36:20

when you're not learning, that means

36:22

that when you produce something, you

36:23

don't know how to tweak it and modify it

36:25

and evolve it and grow it. All you can

36:27

do is nuke it from orbit and start

36:29

again. The other thing I've done

36:31

occasionally with vibe coding is oh vibe

36:33

coding as a consulting company, so many

36:35

problems to fix

36:38

for sure. But you are right on the

36:41

learning the the the learning side both

36:43

on on vibe coding and AI. One one thing

36:45

that I'm noticing on on myself is it is

36:48

so easy to you know give a prompt you

36:51

get a bunch of output and you know you

36:54

should be reviewing a lot of this code

36:57

either yourself or or in a code review

36:59

but what I'm seeing on myself is I'm at

37:02

some point I start to get a bit tired

37:03

and I just let it let let it go and this

37:05

is also what I'm hearing when talking

37:06

with software engineers is the ones who

37:08

are working at companies which are

37:10

adopting these tools which is pretty

37:11

much every company it's there's a lot

37:13

more code going out there, a lot more

37:15

code to review, and [clears throat]

37:17

they're asking, "How can I be rigorous

37:19

at code reviews when there's just more

37:22

and more of them than before?" Have you

37:24

seen approaches that help people, both

37:28

less experienced people and also more

37:29

experienced engineers, keep learning

37:31

with these tools? Just approaches that

37:33

seem promising.

37:34

>> Not a huge amount. Um I do I am very

37:39

much paying attention to what Unmesh um

37:42

is doing with this because his approach

37:44

very much is that notion of let's try

37:47

and build a language to talk to the LLM

37:51

with work with the LLM to produce a

37:53

language to communicate to the LLM more

37:55

precisely and carefully what it is that

37:57

we're looking for. And I do feel that is

38:00

a promising and very much a more

38:01

promising line of attack. Can we

38:03

create our own specialized language for

38:06

working with whatever problem that we're

38:08

working on and I think that actually

38:11

brings another um we're talking about

38:13

things we know LLM are useful for

38:15

another thing and this is again

38:16

something Unmesh has highlighted is

38:19

understanding an unfamiliar environment

38:21

again I was chatting with James he was

38:23

working with um he's he's working on a

38:25

Mac with C which is not a language he's

38:27

terribly familiar with using this game

38:30

engine called Godot.

38:32

>> Godot. Yeah.

38:33

>> Yeah. Godot.

38:34

>> And he doesn't know anything about this,

38:35

right? But with the LLM, he can learn a

38:38

bit about it because he can try things

38:40

out. And if you take it with that

38:42

exploring sense, and I mean, I I mean, I

38:44

can't remember. I've I've certainly got

38:46

to the point where I'm typing in to the

38:48

L. Oh, well, how do I do so and so in R

38:50

that I've I've done 20 times, but I

38:52

still can't remember how to do it. And

38:54

you and exploring and Unmesh makes a

38:57

point again setting up initial

38:58

environments. you know, give me a

39:00

starting project, a sample starting

39:02

skeleton project so I can just get

39:04

moving. Um, and so that kind of

39:07

exploratory stuff and helping in an

39:10

unfamiliar environment and just learning

39:13

your way around an unfamiliar set of

39:14

APIs and and coding ideas and the like.

39:18

it can be quite handy for I

39:20

>> I wonder if this is not all that new in

39:22

the sense that I remember you know one

39:23

of the last kind of big productivity

39:25

boosts in the industry uh about 10 or 15

39:29

years ago was Stack Overflow appearing.

39:32

So before Stack Overflow when you

39:33

Googled for questions you bumped into

39:35

this site called Experts Exchange and

39:37

there was the question and you had to

39:39

pay money to see the answer or you had

39:41

to pay money to get an expert to answer

39:43

but usually there was nothing behind it

39:45

even if you paid and most of us I was a

39:47

college student I just didn't pay right

39:49

>> so you just couldn't find the answer and

39:50

you were all frustrated but then Stack

39:51

Overflow came along and suddenly you had

39:53

code snippets that you could copy and of

39:55

course what a lot of young people or

39:57

like less experienced developers even

40:00

like myself did is you just take the

40:02

code, put it in there and see if it

40:04

works. As you got to more experienced

40:06

engineers or developers, you started to

40:08

tell the junior engineers like you need

40:09

to understand that first like or even if

40:11

it works, you need to understand why it

40:12

works. You need to you should read the

40:14

code. And I I feel we've been there was

40:16

a few years where where we were going

40:18

back and forth of people mindlessly

40:20

copying pasting uh snippets. There were

40:23

problems with uh I think there was a

40:25

question about email validation and a

40:27

top voted answer was not entirely

40:29

correct. And turns out that a good part

40:32

of software developers just used that

40:34

one.

40:35

>> I feel we kind of been around this

40:37

already.

40:37

>> Yeah, it's a similar kind of thing, but

40:39

>> maybe at a smaller scale.

40:40

>> Yeah. But even more boosted and on

40:42

steroids and with the question of, you

40:44

know, how how are things going to

40:46

populate in the future because who's

40:48

going to be writing Stack Overflow

40:49

answers anymore?

40:50

>> Yeah. So, I I I wonder if what we're

40:53

getting to is like you need to care

40:54

about the craft. you you need to

40:56

understand what the LLM's output is and

40:59

it's there to help you and if you're not

41:01

doing it I mean like you should but but

41:03

if you're not you'll eventually be no

41:06

better than someone just prompting it

41:08

mindlessly.

41:09

>> Exactly. Yeah. I mean it I mean I have

41:11

no problem with taking something from

41:13

the LLM and stick putting it in to see

41:16

if it works but then once you've done

41:18

that understand why it works as you say

41:20

and also look at it and say is this

41:23

really structured the way I'd like it to

41:24

be don't be afraid to refactor it don't

41:27

be afraid to put it in and then of

41:29

course the testing combo anything you

41:31

put in that works you need to have a

41:33

test for and and if you constantly are

41:36

doing that back and forth with the

41:37

testing process Martin Fowler was just

41:39

talking about the importance of testing

41:40

when working with LLMs and in general

41:42

when building quality software. Speaking

41:45

of the quality software, I need to

41:46

mention our season sponsor, Linear. I

41:49

recently sat on one of Linear's internal

41:50

weekly meetings called Quality

41:52

Wednesdays, and I was completely blown

41:53

away. This was a 30-minute meeting that

41:56

happens weekly. In this session, the

41:57

team went through 17 different quality

41:59

improvements in half an hour. 17. It's a

42:03

fast and super efficient meeting. Boom,

42:05

boom, boom. Every developer shows a

42:07

quality improvement or performance fix

42:08

that they made that week. And it can be

42:10

anything from massive backend

42:11

performance wins that save thousands of

42:13

dollars to the tiniest UI polish that

42:15

most people wouldn't even notice. For

42:17

example, one fix was fixing the height

42:19

of the composer window very slightly

42:21

changing when you entered the new line.

42:23

Another one was fixing this one pixel

42:25

misalignment. Can you imagine caring

42:28

that much about about the details? After

42:30

doing this every single week for years,

42:32

their entire engineering team has

42:33

developed this incredible eye for

42:35

quality. They catch these issues before

42:37

they even ship. Now, one of their

42:38

engineers told me that since they

42:40

trained this muscle over time, they

42:42

start noticing patterns while building

42:43

stuff. So, fewer of these paper cuts

42:46

ship in the first place. This is why

42:47

Linear feels so different from other

42:49

issue tracking and project management

42:50

tools. Thousands of tiny improvements do

42:52

add up and you feel the difference. When

42:54

you use Linear, you're experiencing the

42:56

results of literally hundreds of these

42:58

quality Wednesday sessions. Tuomas,

43:00

their CTO, recently wrote a piece about

43:02

this week ritual, and I'll link it in

43:04

the show notes below. If your team cares

43:06

about craftsmanship and building

43:07

products that people actually love

43:09

using, check out Linear at

43:10

linear.app/pragmatic.

43:13

Because honestly, after seeing how they

43:14

work up close, I understand why so many

43:16

of the best engineering teams are

43:18

switching. And now, let's get back to

43:20

the importance of testing when working

43:22

with LLMs. I mean, one of the people I I

43:25

particularly uh focus on in this space

43:28

is Simon Willison and something he

43:30

stresses constantly is the importance of

43:31

tests, but testing is a huge deal to to

43:34

him and being able to make these things

43:36

work. And of course, you know, Birgitta is

43:38

from Thoughtworks. We're very much an

43:40

extreme programming company. So, she's

43:42

steeped in in in testing as well. So,

43:44

she will say the same thing. You got to

43:47

really focus a lot on making sure that

43:48

the tests work together. And of course,

43:50

this is where the LLM struggle because

43:52

you tell them to do the tests and I'm

43:55

I'm only hearing problems [laughter]

43:58

or experiencing them myself like when

44:00

the LLM tells me, "Oh, and I ran all the

44:02

tests. Everything's fine. You got npm

44:03

test five failures." H yeah, I I I see

44:07

some improvements there by the way with

44:08

with Claude Code and also other agents.

44:11

But yes, it's the nondeterministic

44:13

angle. Sometimes they can lie to you,

44:15

which is weird, right? I'm I'm still not

44:17

>> They do lie to you all the time. In

44:19

fact, if if if they were truly a junior

44:21

developer, which is how sometimes people

44:23

like to say they should be

44:24

characterized, I would be having some

44:26

words with HR.

44:27

>> Yeah. Like I the other day I just had

44:29

this really weird experience, which is

44:30

the simplest thing. I have a

44:32

configuration file where I add just new

44:34

items, a new JSON, you know, blob, and I

44:36

I put the date of when I added it just

44:39

in the comments saying added on, you

44:40

know, October 2nd, added on November

44:42

1st. It's always a current date. And I

44:45

told the LLM, can you please add this

44:47

configuration thing and add the current

44:48

date? And it added it and it added it

44:51

just copied the last date. And I said

44:53

that is not today's date. I said, oh,

44:55

I'm so sorry. You know, let me correct

44:57

that for you. And it put yesterday's

44:58

date. [laughter]
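The date mix-up is a reminder that deterministic jobs belong in code, not in the model's guesswork. A minimal sketch with a hypothetical config shape:

```python
# Timestamps like "added on" are deterministic: compute them in code
# rather than asking the model to guess today's date.
import datetime
import json

def add_config_item(config: list, item: dict) -> list:
    # Stamp with the real current date from the system clock.
    stamped = {**item, "added_on": datetime.date.today().isoformat()}
    return config + [stamped]

config = [{"name": "feature-a", "added_on": "2024-10-02"}]
config = add_config_item(config, {"name": "feature-b"})
print(json.dumps(config, indent=2))
```

Agents typically get this right only when they shell out to a tool for the date; baking it into the workflow removes the chance to gaslight you at all.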

45:01

And and I I feel you need to get this

45:04

experience to see that it can gaslight

45:07

you for a simple thing of today's date

45:09

which uh you know you know you could

45:11

call a function whatnot but it's it's

45:13

down to which who knows which model I

45:15

was using how that model works whether

45:18

the company creating it is optimizing

45:20

for token usage or not etc etc etc so

45:23

like in the end even for the simplest

45:25

things you are as as a when you're a

45:28

professional working on important stuff

45:29

you should not trust Yeah, absolutely.

45:31

Never. Yeah, it's got to you've got to

45:34

don't trust, but do verify.

45:36

>> Verify. Yes. Uh, speaking with

45:38

developers at at Thought Works and and

45:41

the people you're you're chatting with,

45:43

what areas

45:45

that they are successfully using LLMs

45:47

day-to-day though, like we we we did

45:49

mention just right now testing. We we

45:52

also mentioned things like prototyping,

45:54

but do you see some other things where

45:55

starting to become a bit of a routine?

45:56

Like if if I'm doing this thing, let me

45:58

reach for an LLM. it can probably help

46:00

me

46:01

>> that yeah I mean I'm I've mentioned many

46:04

of that right the prototyping the legacy

46:07

code understanding oh yes the fact that

46:10

um you can use it to explore um new

46:12

technology areas um potentially even new

46:15

domains as long as you you know you

46:18

trust it significantly less than you

46:20

would trust Wikipedia 10 years ago those

46:22

are the things that I'm hearing so far

46:25

>> yeah one interesting area that Birgitta

46:28

is exploring is spec-driven development.

46:31

There's this idea of what well you know

46:33

LLMs have their own limitations but

46:35

what if we define pretty well what we

46:38

want it to do and give it this like

46:40

really good specification and you know

46:42

it can run with it it can run long it

46:44

had iterations and so on. What is your

46:46

take on this and do you have a bit of a

46:48

déjà vu because we we've heard this once

46:50

right your your career started around

46:53

this thing called waterfall development.

46:55

So how how how are you seeing it similar

46:56

but also different this time? Well, the

46:58

the this the similar to waterfall is

47:01

where people try and say let's create a

47:03

large amount of spec and not pay much

47:06

attention to the code. And here I mean

47:08

whether you whether you talk again this

47:10

is what you mean by spec-driven, is it so

47:12

much focusing on that or is it doing

47:16

small bits of spec do the tight loop I

47:19

mean to me the key thing is you want to

47:21

you want to avoid the the waterfall

47:23

problem of trying to build the whole

47:25

spec first. It's got to be do a do the

47:28

smallest amount of spec

47:31

you can possibly get away with to make some

47:32

forward progress. Cycle with that, build

47:35

it, get it tested, get it in production

47:37

if possible, and then cycle with these

47:39

thin slices. What role a spec may play

47:41

to drive in either case could be argued

47:44

to be a spec form of spec driven

47:46

development. But to me, what matters is

47:47

the tight the tight loops, the the thin

47:49

slices, that kind of thing.

47:51

>> And I know Birgitta definitely agrees on that

47:55

point, because she says you have

47:55

to be the human in the loop verifying

47:57

every time that's that's clearly crucial

47:59

where the spec-driven development then

48:01

ties in interesting again it comes back

48:03

to this thinking of building domain

48:05

languages and domain specific languages

48:07

and things of that kind can we craft

48:09

some kind of more rigorous spec to talk

48:13

about and that's you know I mentioned

48:15

what Unmesh was doing there using

48:17

it to build an abstraction because

48:18

essentially what we're saying is that it

48:21

gives us the ability to build and

48:22

express abstractions in a slightly more

48:25

fluid form than we would be able to do

48:27

if we were building them purely within

48:29

the codebase itself. But we still don't

48:31

want them to deviate too much from the

48:33

codebase, right? We still want the

48:35

ubiquitous language notion that it's the

48:37

same language in our head as is in the

48:39

code and we're seeing the same names and

48:41

they're doing the same kinds of things.

48:43

The structure is clearly parallel, but

48:45

obviously the way we think is a bit more

48:47

flexible than the way the code can be.

48:50

and then you know can we blur that

48:52

boundary a bit by using the LLM as a

48:55

tool in that area. So that's the area

48:57

that I think is interesting in in that

48:59

direction. It

49:00

>> it's interesting and new because I I feel

49:02

we've never been able to use language as

49:05

close to representing code ever or or

49:07

like business logic and this is very

49:10

new.

49:10

>> Yeah. Although again people I mean there

49:12

are plenty of people who take that kind

49:14

of DSL like thinking into their

49:16

programming and I would to I know people

49:19

who would say yeah I would I would get

49:21

to the point where I could write certain

49:23

parts of the business logic in you know

49:25

a programming language like say Ruby and

49:28

show it to a domain expert and they

49:29

could understand it. They wouldn't feel

49:31

the ability to be able to write it

49:33

themselves but they could understand it

49:35

enough to point out what was

49:36

wrong or what was right in there. And

49:38

this is just programming code, but

49:41

that requires a certain degree of the

49:44

way you go about projecting the language

49:46

in order to be able to get that kind of

49:48

fluidity. And so that kind of

49:51

thinking like trying to make an internal

49:53

DSL of a programming language or maybe

49:56

building your own external DSL

49:58

DSL meaning domain specific language

50:00

like if you're working with accountants,

50:02

you're going to have the terms that they

50:04

they use, the way they use it and so on.
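The kind of internal DSL being sketched here can be illustrated briefly. This is a hypothetical example of mine in Python rather than the Ruby code discussed, with all names (Invoice, overdue_more_than, late_fee_for, the thresholds and rates) invented for illustration:

```python
# A minimal internal-DSL sketch (all names and rules hypothetical):
# business logic written so a domain expert can read it aloud and
# sanity-check it, even if they would not write it themselves.

class Invoice:
    def __init__(self, amount, days_overdue):
        self.amount = amount
        self.days_overdue = days_overdue

    def overdue_more_than(self, days):
        # A tiny vocabulary word that makes the rules below read naturally.
        return self.days_overdue > days

def late_fee_for(invoice):
    # Reads close to how an accountant would state the rule:
    # more than 60 days overdue incurs a 10% fee, more than 30 days a 5% fee.
    if invoice.overdue_more_than(60):
        return invoice.amount * 0.10
    if invoice.overdue_more_than(30):
        return invoice.amount * 0.05
    return 0.0

# A domain expert can read this and confirm: 45 days overdue is the 5% tier.
assert late_fee_for(Invoice(amount=1000, days_overdue=45)) == 1000 * 0.05
```

The point is the reading experience: a domain expert would not write `overdue_more_than`, but they can spot when the 30-day threshold or the 5% rate is wrong.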

50:07

>> Yeah. And what you're trying to do, of

50:09

course, is create that communication

50:11

route where a non-programmer

50:14

can at least read what's going on and

50:17

understand it enough to to be able to

50:20

find what's wrong with it and to

50:22

suggest changes which may not be

50:24

syntactically correct, but you can

50:26

easily fix them because you as a

50:28

programmer, you can see how to do that.

50:29

And that's the kind of goal and some

50:31

people have reached that goal in some

50:33

places. So the interesting thing is

50:35

whether whether LLMs will enable us to

50:38

make more progress in that direction and

50:40

see that happening more widely

50:41

>> and I guess this must be I'm just

50:43

assuming correct me if I'm wrong this

50:44

must be especially important in

50:45

enterprises these very large companies

50:47

where software developers are not the

50:49

majority of people let's say they're 10

50:50

or 20% of staff and there's going to be

50:53

accounting marketing special business

50:56

divisions who all want software written

50:58

for them and they know what they want

51:01

and historically there's been layers of

51:03

people translating, be that the

51:05

project manager, the technical pro, etc.

51:08

So you're saying that there could be a

51:10

pretty interesting opportunity or or

51:12

just an experiment with LLMs, that maybe

51:14

we can make this a bit easier for

51:16

both sides.

51:17

>> That is the world I'm most familiar with

51:19

right, is that world. I mean, my

51:22

sense is you're very

51:25

familiar with the big tech company and

51:27

the startup worlds but this corporate

51:30

enterprise world of course is a whole

51:32

different kettle of fish because exactly

51:33

the reason that you said suddenly the

51:35

software developers are a small part of

51:37

the picture and there's very complex

51:39

business things going on that we've got

51:41

to somehow interface in and of course

51:43

also there's usually a much worse legacy

51:45

system problem as well. Um and and

51:48

there's going to be regulation, there's

51:49

going to be a history, there's going to

51:51

be exceptions because of all the

51:54

knowledge. I I think we can all just

51:55

think of banks, of all the things, cuz

51:56

there's a perfect storm, right?

51:58

They have regulation that changes all

52:00

the time. They have incidents that they

52:01

want to avoid going forward. They'll have

52:03

special VIP, I don't know, accounts or

52:05

whatever that they'll want to do. And of

52:07

course, they have all these business

52:08

units that all know their own rules and

52:11

and frameworks. And they've been around

52:12

since before technology. Some of some of

52:14

the banks have been around for, you

52:15

know, 100 plus years.

52:17

>> Yeah. And remember, the banks tend to be

52:18

more technologically advanced than most

52:21

other corporations in software.

52:23

[laughter]

52:23

>> That's a good one.

52:25

>> You're looking at the at the good bit

52:27

when you're talking about banks.

52:29

[laughter]

52:29

>> You you have worked with some with some

52:31

of the less advanced folks as well.

52:33

>> I mean, yeah, retailers, airlines,

52:36

government agencies, things of that

52:37

kind. I mean, it was interesting. I was

52:39

chatting with some folks working in the

52:41

Federal Reserve in Boston and uh you

52:44

know they're

52:46

they have to be extremely cautious. They

52:49

are not allowed to touch LLMs at the

52:50

moment because

52:52

you know the the consequences of error

52:55

when you're dealing with you know a

52:57

major um government banking organization

52:59

are pretty damn serious. So you've got

53:01

to be really really careful about that

53:03

kind of stuff. and and yeah, their

53:05

constraints are very different and and

53:07

it it it brought to mind this there's a

53:10

an adage that says that to understand

53:12

how the software development

53:13

organization works, you have to look at

53:15

the core business of the organization

53:17

and see what they do. Interesting. I I

53:20

was at this agile conference for the

53:21

Federal Reserve in Boston and they took

53:24

me on a tour of the Federal Reserve, to

53:25

where they handle the money. And so I

53:28

saw the places where they bring in the

53:31

the notes that have been brought in from

53:33

the banks and they kind of clean them

53:34

and count them and all the rest of it

53:36

and and send out the stuff again. And

53:38

you look at the degree of care and

53:41

control that they go through, as you

53:42

could imagine. I mean, when you're

53:44

bringing in huge wads of cash and it has

53:48

to be sorted and counted and all the

53:50

rest of it, the controls have to be

53:52

really, really stringent. And you look at

53:54

that and you look at the care with which

53:56

they do all of this and you say, "Yep, I

53:58

can see why in the software development

54:00

side that mindset percolates because

54:04

they are used to the fact that they

54:05

really have to be careful about every

54:07

little thing here." A lot of

54:09

corporations of course have that similar

54:10

notion. you're you're involved in an

54:12

airline, you are really concerned about

54:14

safety. You're really concerned about um

54:16

getting people to their destination; that

54:18

affects your whole way of thinking or

54:19

ought to and it does and I guess this is

54:22

a reason we are clearly seeing we always

54:25

see a divide in technology usage because

54:27

you have the startups which is a group

54:29

of people they just raised some funding

54:30

or they have no funding. They have

54:32

nothing to lose. They have they have

54:33

zero customers. They have everything to

54:35

gain. They they need to jump on the

54:37

latest bandwagon. They want to try out

54:39

the latest technologies. oftentimes

54:41

build on top of them or sell tools to

54:42

use the latest technology and they're

54:44

here to break the rules and you know

54:46

midway, when you start to

54:49

have a few customers in a business

54:50

you're starting to be a bit more careful

54:52

and of course you know 50 or 70 years

54:54

down the road when the founders have

54:56

gone and and now it's a large enterprise

54:59

you will just have different risk

55:00

tolerance right

55:01

>> exactly yeah

55:02

>> But what I find

55:04

fascinating talking about this that I'm

55:07

unsure if there has been any new

55:09

technology that has been so rapidly

55:11

adopted everywhere. You mentioned that

55:12

let's say the Federal Reserve or some

55:14

other government organizations might say

55:17

let's not touch this yet but they are

55:19

also evaluating it sounds like it. So if

55:21

they're you know they're the one of the

55:22

most I guess behind in the technology

55:25

curve for very good reason they're

55:26

already aware of it or using it which

55:28

probably means that it's everywhere now.

55:30

Oh, it is. I mean, it is. I mean, we see

55:32

it all over the place, but again, with

55:34

that with with more caution in the

55:37

enterprise world where they're saying,

55:38

"Yeah, we we also see the dangers here."

55:40

>> And then you're you're seeing kind of

55:41

more nimble companies that you work with

55:43

and the more enterprise focused. What

55:44

would you say is the biggest difference

55:46

between their relationship of of AI uh

55:49

their approach? Is it is it this caution

55:51

or are there other characteristics that

55:53

the the the big more traditional less

55:56

more riskaverse companies approach it

55:58

differently? The the important thing to

56:00

remember with any of these big

56:01

enterprises is they are not monolithic.

56:04

So it'll be small portions of these

56:06

companies can be very adventurous and

56:08

other portions can be extremely not so.

56:12

And so what you'll see is, I mean,

56:14

like, you know, when I started at

56:15

Chrysler, right, and I was in this

56:17

little bit that was being very very

56:19

aggressively doing really wacky things,

56:21

right? I mean you'll find that in any

56:23

any big organization you'll find some

56:25

small bits doing some stuff. Um, and so

56:28

it's really the variation within an

56:30

enterprise often is bigger than the

56:32

variation between enterprises.

56:34

>> Good to keep that in mind. So speaking

56:37

about refactoring. LLMs are very good at

56:40

refactoring, and you've written

56:41

the book back in 1999 called

56:43

Refactoring. This is now the second

56:45

edition which 20 years later it's it's

56:47

been refreshed. And it's it's actually a

56:49

really detailed book going through

56:51

different code smells uh that could show

56:54

that where the code is, techniques of of

56:56

refactoring it. On the first page

56:58

already, it has, I really like this, a

57:00

list of refactorings; I don't

57:02

know how the publisher printed this

57:04

because it's so unusual, but it's

57:06

right here on the table of

57:07

contents. Why did you decide to write

57:09

this book back in 1999? Can you bring us

57:11

back on what the envir environment was

57:13

like and what was the impact of the

57:15

first edition of this book? Okay. So, I

57:18

first came across refactoring at

57:20

Chrysler. Yeah. When I was working with

57:22

Kent Beck, right, early on in the

57:24

project. Um, I remember

57:29

in my hotel room, the courtyard or

57:31

whatever in Detroit, him showing me how

57:34

he would refactor some Smalltalk code.

57:37

And what I mean, I was always someone

57:40

who liked going back to something I'd

57:43

already written and make it more

57:44

understandable. I've always cared a lot

57:46

about something being comprehensible.

57:48

That's true in my prose writing and in my

57:51

software writing. And so that I knew,

57:53

but what he was doing was taking these

57:55

tiny little steps and and I was just

57:58

astonished at how small each step was,

58:01

but how because they were small, they

58:03

didn't go wrong and they would compose

58:05

beautifully and you could do a huge

58:06

amount with this sequence of little

58:08

steps. And that really blew blew my mind

58:10

away. I thought, "Wow, this is a big big

58:13

deal." But Kent was at the time his

58:16

energy was to write the first extreme

58:18

programming book, the white book. He

58:19

didn't have the energy to write a

58:20

refactoring book. So I thought, well,

58:22

I'm going to do it then. [laughter]

58:24

And I started by, you know, whenever I

58:27

was refactoring something, I would write

58:29

careful notes. And partly to because I

58:31

needed it for myself. How do I extract a

58:34

method, so that I don't screw

58:37

it up? And so I would write careful

58:39

notes on each one. And then each of

58:41

those turned into the mechanics in the

58:42

refactoring book. And

58:44

then I'd make an example for each

58:46

one. And that was the first edition in a

58:49

book. And I did it in Java, not

58:50

in Smalltalk, because Smalltalk was

58:52

dying sadly. And Java was the language

58:55

of the future, the only programming

58:56

language we'd ever need in the future in

58:58

the late 90s. And so that's what

59:01

led to the um to the first book. And um

59:05

the impact well I mean and also

59:08

refactoring. I should also stress it

59:10

wasn't invented by Kent. I mean it was

59:13

very much um developed by um Ralph

59:16

Johnson's crew at the University of

59:17

Illinois at Urbana-Champaign. They built

59:19

the first refactoring browser in

59:21

Smalltalk, which was the first tool that did

59:23

the automatic refactoring that we talk

59:24

about now. That was the original the

59:26

refactoring browser, built by, um,

59:30

John Brandt and Don Roberts

59:32

um, did that, and then when the book

59:36

came out that got more interest there

59:38

was already some interest from the IBM

59:41

visual age folks because they came out

59:43

of Smalltalk. The original versions of

59:45

Visual Age were in fact built in

59:46

Smalltalk. Um, and so they were already aware

59:49

of what was going on to some degree, but

59:50

it was the JetBrains folks that really

59:52

caught the imagination because they put

59:53

it into the early versions of IntelliJ

59:56

IDEA and really ran with it. Then you

59:57

ran into it with ReSharper, of course.

60:00

Um, and um, they really made the

60:02

automated refactorings become something

60:04

that people could rely on, but it's

60:06

still good to know how to do them

60:07

yourself because often you're in a

60:08

language where you haven't got those

60:09

refactorings available to you. So it's

60:11

nice to be able to pull out that stuff

60:13

and some of them aren't obviously in

60:15

there and yeah so the impact it's had is

60:17

refactoring became a word and of course

60:18

like all of these words got horribly

60:20

misused and people use refactoring to

60:22

mean any kind of change to a program

60:23

which of course it isn't because

60:25

refactoring is very strictly these very

60:27

small, behavior-preserving

60:30

changes that you make in tiny, tiny steps. I

60:34

always like to say each step is so small

60:36

that it's not worth doing but you string

60:38

them together and you can really do

60:40

amazing things I I I think we've all had

60:42

that story. At least I had a story where

60:44

one of my colleagues or you know it

60:45

could have been me but often times one

60:47

of my colleagues would say at

60:49

stand-up, like, oh, I'm just going to

60:51

do a refactoring and then next day oh

60:54

I'm still doing the refactoring next day

60:56

oh I'm still doing the refactoring and

60:58

[laughter]

60:59

you know, that missed the part about

61:00

the small changes, for sure. What made

61:02

you do a second edition for the book 20

61:05

years later in 2019 which was fairly

61:07

recent? Well, it was a sense of um

61:09

wanting to refresh

61:12

um some of the things that were in it.

61:14

There were some new things that I

61:16

had. I was also concerned that I mean

61:18

when you've got a book that's written in

61:20

late 1990s Java, it it shows its age a

61:23

bit.

61:24

>> Yes. [laughter]

61:24

>> And although the core ideas I felt were

61:26

sound and people could still use it, I

61:28

felt it was worth doing

61:31

it in a more modern environment. And

61:33

then the question was which you know

61:35

would I stay with Java or did I switch

61:36

to another language and in in the end I

61:38

decided to switch to JavaScript. I felt

61:40

it would reach a broader audience that

61:41

way and also allow a less

61:45

object-oriented centered way of

61:46

describing things. So instead of extract

61:49

method it's extract function because of

61:51

course it's the same process for

61:52

functions and also some things that you

61:55

you wouldn't necessarily think of

61:57

doing in an object-oriented language.

61:59

But um it was mainly just to to get that

62:01

refresh um to redo the examples to

62:05

really hopefully give it another 20

62:07

years of life because it's got to keep

62:09

me going until I croak, you know.

62:10

[laughter]

62:11

>> Yeah. So you published this book 25

62:14

years ago or 26 years ago in the

62:16

industry based on your interactions with

62:18

developers. How has the perception of

62:20

refactoring changed? because in the book

62:22

you specifically wrote that you

62:24

see refactoring as a key element in the

62:26

software development life cycle and

62:28

you've also talked about how when you

62:30

refactor uh the overall cost of changing

62:33

code over time can be a lot cheaper. Was

62:35

there a time where there was a lot more

62:37

uptake on this or is there still or or

62:39

do you feel it's kind of like a little

62:41

bit like being

62:43

maybe refactoring went a little bit out

62:45

of style as some of those really

62:47

innovative tools at the time, like

62:49

JetBrains and others. They're maybe not as

62:53

as uh kind of referenced even though

62:55

they're everywhere.

62:56

>> It's hard to say for me. Um because I

62:59

mean again most of the interaction I

63:01

have is with folks at Thoughtworks. They

63:03

tend to be more clued up with this kind

63:05

of stuff than the average developer.

63:06

Certainly, I read plenty of things on

63:09

the internet that make me just shake my

63:10

head at

63:11

how even refactoring is being described,

63:13

let alone the lack of doing it,

63:16

and certainly in the kind of structured

63:18

way, controlled way that that I like to

63:20

do it because I like doing it quickly

63:23

and effectively. And you know, it's one

63:24

of those things where the disciplined

63:26

approach actually is faster, even though

63:28

it may seem strange to describe it that

63:30

way. But I mean, it has at

63:35

least become part of our language now.

63:35

People talk about doing it. It's in

63:37

these tools and they do it very

63:39

effectively. The refactorings that they

63:40

do, I mean, it's wonderful to work in in

63:42

an environment where you can actually

63:44

automatically do so many of these

63:46

things. And so I feel we've definitely

63:48

made some progress. Maybe not as much as

63:50

I'd have hoped for, but you know, that's

63:52

often the way with these things.

63:53

>> Looking ahead with AI tools, they they

63:55

generate a lot more code a lot faster.

63:57

So, we're just going to have a lot more

63:58

code. We already have a lot more code.

64:00

>> How do you think the value of of

64:02

refactoring thinking about the your

64:05

intended meaning of of those small

64:06

ongoing changes is going to be

64:08

important? And are you already seeing

64:10

some of this being important?

64:11

>> I wouldn't say I'm already seeing it.

64:14

Um,

64:16

but I I can certainly expect it to be

64:18

increasingly important. Um, because

64:21

again, if you're going to produce a lot

64:23

of code of questionable quality,

64:25

but it works, then refactoring is a way

64:29

to get it to into a better state while

64:31

keeping it working. Um, these tools at

64:35

the moment definitely cannot

64:38

refactor on their own, although they can be

64:39

combined with other things. Adam

64:39

Tornhill does some interesting stuff

64:41

with combining LLMs with other tools to

64:44

be able to get a much more effective

64:47

route and I think that kind of approach

64:49

combining could be a good way to do it.

64:52

Um but definitely the refactoring

64:54

mindset and thinking how do I make

64:56

changes by basically boiling them down

65:00

to really small steps that compose

65:01

easily. That's really the trick of it.

65:04

The smallness and the

65:05

composability. combine those two and you

65:08

can make a lot of progress.
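The tiny-steps idea can be sketched concretely. This is a minimal hypothetical example of mine in Python (not from the book): a single Extract Function step, where behavior is preserved and verified by comparing the old and new versions on the same input.

```python
# Before: one function mixing the subtotal loop with the tax calculation.
def invoice_total_before(items, tax_rate):
    subtotal = 0.0
    for price, qty in items:
        subtotal += price * qty
    return subtotal + subtotal * tax_rate

# One refactoring step: Extract Function. Pull the subtotal loop out into a
# named helper. Behavior must be identical; nothing else changes.
def subtotal_of(items):
    total = 0.0
    for price, qty in items:
        total += price * qty
    return total

def invoice_total_after(items, tax_rate):
    subtotal = subtotal_of(items)
    return subtotal + subtotal * tax_rate

# Because the step is behavior-preserving, both versions agree on any input.
items = [(10.0, 2), (3.5, 4)]
assert invoice_total_before(items, 0.2) == invoice_total_after(items, 0.2)
```

Each step like this is almost too small to be worth doing on its own, which is exactly why it never goes wrong; stringing dozens of them together is what makes a large restructuring safe.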

65:10

>> It's interesting because because right

65:11

now if you want to refactor you need to

65:13

have your IDE open for sure. And I mean

65:16

the fast way is just just using the

65:18

built-in tools or you moving things

65:19

around. What what I found as well is

65:21

describing it when I have a command line

65:23

open with, like, Claude Code or something

65:25

similar. It's tough, or I spend

65:28

more time explaining it than me doing

65:30

that small change. And I do wonder

65:34

if we will see more integrations

65:36

in this end as well so that LLMs can

65:38

actually do it or some of them might do

65:40

it automatically cuz as you say it it

65:42

doesn't work out of the box but I think

65:44

for any quality software that I mean we

65:46

we all learn the hard way that if you

65:47

just kind of leave it there and don't go

65:49

back and don't change it up when your

65:51

when your functions get just the simple

65:53

things right when your function gets too

65:54

long when your class gets too long you

65:56

break it up otherwise you're not going

65:58

to understand it later. Yeah, it'll be

66:00

interesting as well to see if it

66:01

provides a way for us to control the

66:04

tool. I mean, one of the things that

66:06

interests me is where people are using

66:08

LLMs to describe queries against um

66:11

relational databases that turn into SQL.

66:15

You don't know how to get the SQL right,

66:17

but if you type the thing at the LLM, it

66:19

will give you back the SQL and you can

66:21

then look at it and say, "Oh, this is

66:23

right or not right." And tweak it and it

66:26

gets you started, right? And so

66:28

similarly with refactoring, it may allow

66:30

you to get started and say, "Oh, these

66:32

are the kinds of changes I'm looking at

66:34

and be able to make some uh progress in

66:36

that. I mean, particularly where you're

66:38

talking about these automated changes

66:40

across large codebases." There was

66:41

an example of this was it a year ago or

66:44

so when one of these big companies

66:45

talked about this massive change and

66:47

made to change APIs and and clean up the

66:50

code, and they mentioned it as an LLM

66:52

thing, but it wasn't an LLM. It was a it

66:54

was that different tool and I'm

66:56

completely blanking on what the names of

66:58

all of these things were. Oh, I have a

67:00

60-year-old brain and I'm not able to

67:02

remember anything anymore. It'll come to

67:03

me at some point. But actually, it was

67:05

it was a combination of, you know, maybe

67:07

10% LLM and 90% this other tool. Um, but

67:12

that was it again provided that extra

67:15

leverage that allowed them to to make

67:16

the progress. I think those kinds of

67:18

things are really quite interesting

67:19

using the LLM as a starting point to

67:23

drive a deterministic tool and then

67:25

you're able to see what the

67:26

deterministic tool is doing. That's I

67:28

think where there's some interesting

67:30

interplay. Speaking about going on from

67:32

refactoring to software architecture,

67:35

you were very busy writing books around

67:37

the early 2000. You wrote the book

67:39

patterns of enterprise application

67:40

architecture in 2002 and this was a

67:43

collection of more than 40 patterns uh

67:45

things like lazy load identity map

67:48

template view and many others and I

67:50

remember around this time there was your

67:52

book about enterprise uh architecture

67:55

patterns there was also the gang of four

67:57

book there was a lot of talk when I was

68:00

interviewing around that time on

68:01

interviews they were asking me questions

68:03

about how to do a factory pattern and

68:06

singleton and and all of these things

68:08

software architecture was talked about

68:10

my sense was in a lot of places or a lot

68:12

more. Then something happened something

68:15

starting from the 2010s I I no longer

68:18

hear most technologists talk about

68:21

patterns or architecture patterns. How

68:24

have you observed this period of when

68:26

the book came out? what was the impact

68:27

of it and why why was it important to to

68:30

talk about it and and put it into the

68:32

industry and how have you seen this

68:34

change of where we stopped talking more

68:38

on on patterns and why do you think it

68:39

happened?

68:41

>> Yeah, that I mean I've always found it a

68:44

I mean what you're doing with patterns is

68:46

you're trying to create a vocabulary to

68:48

talk more effectively about

68:48

these kinds of situations. I mean it's

68:53

just like in you know in the medical

68:55

world they come up with this jargon in

68:57

Greek and Latin to more precisely talk

68:59

about things that are quite involved and

69:02

complex. Yes.

69:03

>> And with patterns what we're trying to do

69:04

is trying to evolve that same kind of

69:06

language except we're not doing it in

69:07

Greek and Latin. I certainly feel that

69:09

they they do help communication flow

69:12

more effectively. You know once people

69:13

are familiar with that terminology. I

69:16

mean you don't look at them as some kind

69:17

of you know how many of them can you

69:19

cram into the system you're building.

69:20

It's more a sense of how can you use it

69:22

to describe your alternatives and the

69:24

options that you have and also think

69:27

about more about when to apply things or

69:29

not apply them. I mean patterns are only

69:32

useful in certain contexts. So you you

69:34

you very much got to understand the

69:35

context of when to use them. And yeah,

69:38

it's it's kind of a shame that some of

69:40

the the wind has gone out of the sails

69:42

of that perhaps because people were

69:44

overusing them in terms of trying to use

69:46

them as a sort of a like pinning medals

69:48

on a chest. But it can still be very I

69:50

mean I I mean I worked very recently

69:52

with Unmesh on his book of patterns

69:54

in distributed systems and I felt that

69:56

was a very good way of coming up with

69:58

again a language to describe how we

70:01

think about the core elements and better

70:03

gain an understanding of how distributed

70:05

systems work which is an important

70:07

aspect of how to deal with life these

70:09

days because we're all building these

70:11

kinds of distributed systems. So I still

70:13

feel that they can be a very good way of

70:15

expressing that. It's hard for me

70:18

to get a sense of why

70:22

they kind of became less fashionable.

70:23

Maybe they'll become more fashionable

70:25

again. Who knows? But I I I'm always

70:27

looking for ways to try to spread

70:30

knowledge around and make things more

70:32

understandable. And I do feel that this

70:35

idea of trying to identify these create

70:38

these nouns that we can talk about

70:40

things more precisely is a good way of

70:42

part of doing that. I I wonder if

70:45

because I I've I' I've seen I've worked

70:47

at places where we we used these things

70:49

and then places where we just like threw

70:50

them out the window, no one was using

70:52

it. And a difference was honestly just

70:55

kind of the age and the attitude of the

70:57

company cuz there was a sense at some

70:59

point that the patterns there were for

71:02

legacy companies. So startups would just

71:04

start from a blank sheet of paper, you

71:06

know, a whiteboard, you know, UML was a

71:08

perfect example where UML had pretty

71:09

strict rules on how to do the arrows.

71:11

And if you do that, right, you could

71:13

even generate code and do all these

71:14

things. And at startups, the software

71:17

architecture still exists, but you just

71:18

put it on the whiteboard and you just

71:20

drew a box or a circle and you didn't

71:22

care about the arrows. And it was just

71:23

uh I guess we we're not going to lock

71:26

ourselves into existing ways of doing

71:29

things. And it's a bit of an education

71:30

as well like you do need to onboard to

71:33

these things. You all need to have a

71:34

shared understanding and maybe it's just

71:37

a combination of of of these two things.

71:39

And I guess it's a generational thing as

71:41

well. you know, every every few years a

71:43

new generation comes out and the same

71:45

way where at some point uh I I was one

71:47

of the first people in college where it

71:49

was super cool to use Facebook and it

71:51

was just all us college students and then

71:53

when my parents went on there it was

71:55

super uncool to use Facebook or my

71:59

grandparents came on there like I I kind

72:00

of like stopped using it when they

72:03

started using it. So I I wonder if there

72:04

there's like like these waves going back

72:06

and forth because inside of these

72:08

startups there is a language uh like you

72:11

know lingo uh about how they talk about

72:14

the architecture and it starts to form

72:16

over time. You start to see it whether

72:18

it's longer tenure people you get more

72:20

and more of the jargon except it's not

72:21

in a book that anyone can read but you

72:24

have to go in there or go to similar

72:26

company where they take the jargon with

72:27

them.

72:28

>> Exactly. and and people will create

72:29

these jargons. Um, and it's an inevitable

72:33

part of communication. You

72:35

can't explain everything from

72:37

first principles requiring five

72:40

paragraphs every single time. If you're

72:43

using the term all the time, you just

72:44

make a word out of it. And then

72:45

everybody creates their own words. And

72:47

all you're doing when you're coming up

72:49

with a book like the patterns of

72:51

distributed systems is you're trying to

72:52

say, "Okay, here's a set of words with a

72:54

lot of definition and explanation of

72:56

them. and let's hope we can kind of

72:58

converge on that so that we can

72:59

communicate a bit more widely. Um, but

73:02

it's also quite natural for people to

73:03

say, you know, within our little

73:05

environment, we create our own little

73:06

jargon. So, we don't take notice of that

73:09

and and then you get the the mismatches

73:12

that occur as you only you only really

73:14

notice that as you cross these different

73:16

environments.

73:17

>> Grady Booch had an interesting take on this

73:20

by the way. So I asked him about the

73:21

same thing because he's he's been so

73:22

much into software he still is into

73:24

software architecture and he's

73:25

progressed the field a lot and he said

73:28

that what he thinks happened is that

73:31

starting in, like, the 2010s, cuz the patterns

73:35

died out from mainstream industry I'll

73:38

say again it's it's still in some

73:39

pockets but around the 2010s one

73:41

interesting thing that happened around

73:43

that time is that the cloud started to

73:45

get bigger AWS Google cloud and a lot of

73:48

companies started to build similar

73:50

things. They started to build either

73:52

initially on-premise, backend services

73:54

where you had most of your business

73:55

logic later it moved to the cloud and

73:57

Grady said that these hyperscalers,

74:00

the cloud providers, AWS for example,

74:02

they built all these services that are

74:04

really well architected so you can kind

74:06

of use one after the other and it's

74:08

well done you don't need to worry too

74:10

much about your data storage you just

74:11

use, let's say, DynamoDB or a managed

74:14

Postgres service, so suddenly

74:16

architecture is not all that important

74:19

because these blocks take care of it for

74:20

you. You have these building blocks and

74:21

now you're talking about using this

74:24

database on top of this system. His

74:26

observation was maybe architecture was

74:28

solved with a well-architected

74:30

building block that you could use and

74:32

you didn't have to reinvent the wheel.

74:34

>> Yeah. or but I suspect there's still

74:36

patterns of using these things and

74:38

that's something I haven't delved into

74:40

because I just haven't had the

74:41

opportunity to

74:42

>> focus on that or more precisely I

74:45

haven't had enough of my colleagues uh

74:47

banging on my door

74:49

with uh draft articles to be able to

74:52

publish on it.

74:53

>> Well, one pattern that I do see is that

74:55

every company, you know, names their

74:56

system. Some have wacky names, some have

74:58

logical names. But when you talk about

75:00

architecture, you typically talk about

75:01

like, you know, at Uber we

75:03

had the bank emoji service, which

75:05

was migrated to Gulfstream,

75:08

which, you know, these all sound like

75:10

they don't make too much sense if

75:12

you're from the outside. Sometimes they

75:14

have, like, proper names they try, like

75:15

the payment profile service, but

75:17

then there's a new version, and that's

75:18

now the payment pro... that's PP, PP2,

75:21

anyway. But inside every company,

75:23

like you will talk about these specific

75:25

names and you will talk about how they

75:27

work, how small they are, how large they

75:29

are and that's kind of I feel that's

75:31

oftentimes the lingo.

75:32

>> Yeah, it is. It becomes, again,

75:34

part of the lingo of larger

75:35

organizations and again you take a

75:38

company that's been around for much

75:40

longer than Uber and of course that

75:42

lingo is baked into the organization.

75:44

It can take you several years just to

75:46

figure out what the hell's going on

75:47

because it just takes you that long to

75:50

learn all of these systems and how they

75:51

interconnect.

75:52

>> Well, one of the fascinating

75:53

conversations that I had many years ago

75:54

was with someone very high up in American

75:57

Express and we were talking about how uh

76:01

he was responsible for rearchitecting

76:02

their system to the next generation. And

76:05

uh he was just getting ideas on how to

76:07

socialize ideas and and get things out.

76:09

And I asked how long have you been

76:10

working on this? It has been 3 years.

76:12

And I was like, "Okay, so

76:14

where are you, are you, like, done?"

76:15

He's like, "No, no, this is just a

76:16

planning, like, [laughter] we're

76:18

close to finishing the planning."

76:19

And to me, it didn't compute because

76:21

like, 3 years of planning? But again,

76:25

once I started to understand the

76:26

scale of the business, how much

76:29

money, how many legacy systems they

76:31

have, half of what he did was

76:32

talk with business stakeholders to

76:34

convince them or get buy-in. Um, I guess

76:37

this eventually happens with, like,

76:40

most companies, except when you're at a

76:41

younger company or digital-first or

76:44

tech-first companies, meaning founded in 2010

76:46

or later. You still don't see this but

76:48

it might come in 10 years.

76:49

>> Oh yeah, it certainly will. Oh, it's

76:52

interesting. I remember

76:53

chatting with somebody

76:55

who had joined a bank uh an established

76:58

bank and they had joined from a startup

77:02

and one of their jobs was to modernize

77:04

the way the bank stuff was going and the

77:07

comment was: now we've been here 3 years,

77:10

now I think I can understand the problem

77:13

I've got some idea of what I can do,

77:15

what can be done but it just takes you

77:18

that long to just really understand the

77:20

land where you are in this new landscape

77:22

because it's it's big and it's been

77:24

around a long time and it's complicated

77:25

and it's not logical because it's built

77:27

by humans, not by computers. And it's

77:30

not a logical system. And there's all

77:32

sorts of history in there because all

77:34

sorts of things happen because so and so

77:36

met so and so and had a row with so

77:38

and so. And all of these things kind of

77:41

percolate over time and this vendor came

77:43

in here and was popular over here and

77:45

then the person who liked this vendor

77:46

got moved to a different part of the

77:48

organization. and somebody else came in

77:50

who wanted a different vendor. And all

77:53

of this stuff builds up over time to a

77:54

complicated mess. And any big company is

77:57

going to have that kind of complicated

77:58

mess cuz it's very hard to not get into

78:02

that situation. And yeah, I mean,

78:05

Uber's lucky that it's only, you know,

78:08

a relatively young company, but it will

78:10

be, you know, assuming it survives in 50

78:13

years' time, it'll be like American

78:15

Express, right?

78:16

>> Yep. You can already see the changes,

78:19

the layers of processes and so

78:20

on, which is kind of, like,

78:23

necessary as you grow. Speaking of, uh,

78:26

change and iteration, and agile: so

78:30

you were part of the 17 people who

78:32

created the agile manifesto and I

78:34

previously asked Kent Beck about this, who

78:36

was another person involved can you tell

78:38

me from your perspective what was the

78:40

story there on how you all came

78:42

together, how this pretty chaotic, I

78:44

think, day played out? And what was the

78:48

reception, as you recall, back then?

78:50

This was 2001,

78:51

>> right? So I mean the the origin of it I

78:55

always feel was actually a meeting we

78:57

had that Kent ran about a year before we

79:00

did the agile manifesto and it was a

79:02

gathering of extreme programming folks

79:04

who were working with extreme

79:05

programming and we had it at this uh

79:08

place near where Kent was living at the

79:10

time in middle of nowhere Oregon and uh

79:13

he also invited some people who weren't

79:16

directly part of the extreme programming

79:18

group, folks like Jim Highsmith, um, along

79:20

as well. And part of the discussion we

79:22

had was should extreme programming be

79:25

the relatively narrow thing that Kent

79:27

was describing in the white book or

79:29

should it be something more broad that

79:31

had many of the similar kind of

79:33

principles in mind and Kent decided he

79:35

wanted something more concrete and

79:36

narrow and then the question is well

79:38

what do we do with this broader thing

79:39

and how it overlaps with things like

79:41

what the scrum people were doing and all

79:42

that kind of stuff that's what led to

79:44

the idea of getting together people from

79:47

these different groups and we had the

79:49

argument about whether we were going to

79:50

hold it in Utah because Alistair

79:53

wanted it in Utah and then Dave Thomas

79:54

wanted to have it in Anguila in the

79:56

Caribbean and for whatever reason we

79:58

ended up in Utah um and the skiing

80:02

and so we and we gathered together the

80:04

people that we did and of course it was

80:06

a case of who actually came along and

80:09

because obviously lots of people were

80:10

invited who didn't come um and I wasn't

80:12

terribly involved with that although Bob

80:14

Martin does insist that I was involved;

80:17

he mentioned some lunch

80:19

in Chicago which is very likely because

80:21

I was going to Chicago all the time for

80:22

work at the time. So, I probably did,

80:24

but I don't remember. Um, and of the

80:26

meeting itself, I actually don't

80:27

remember very much of it, which is a

80:29

shame. I, you know, curse myself for

80:32

not writing a detailed journal of

80:34

those few days. Um, I'd love to know,

80:37

you know, how did we come up with the

80:39

"this over that" structure for the

80:41

values, for instance, which I think was

80:43

really wonderful, but I have no idea how

80:45

that got put together. So,

80:47

unfortunately, I get very vague about

80:48

the actual doing of it. I do

80:50

have a fairly clear memory,

80:53

although we should be wary about that.

80:55

I'll perhaps come to why later, about

80:57

uh, Bob Martin being the one who

81:00

was really insistent on I want to make a

81:02

manifesto and me thinking oh well yeah

81:05

we can do that, the manifesto

81:05

itself will be completely useless and

81:09

ignored of course but the exercise of

81:11

writing it will be interesting it

81:13

>> Um, and that was my reaction to it: no

81:15

matter how I felt about the manifesto I

81:17

felt ah nobody will take any notice of

81:19

this.
>> Oh wow.

81:20

>> but um hey we're having fun writing it

81:23

and we're understanding each

81:24

other etc. And that will be the value,

81:26

right? We'll understand each other

81:27

better.

81:28

>> And then of course the fact that it made

81:29

a bit of an impact was kind of a shock.

81:32

And then of course it gets misused

81:34

most of the time, because there's

81:36

that lovely quote from

81:38

Alistair Cockburn that your brilliant

81:39

idea will either be ignored or

81:41

misinterpreted and you don't get to

81:42

choose which of the two it is.

81:44

>> Well, it also helps that the manifesto

81:45

has four different lines, and so people

81:47

just pick and choose which one they want

81:50

to point to.

81:51

>> 12 principles.

81:51

>> Oh, and the 12 principles, yes. And

81:54

the fact that it says at

81:56

the beginning "we are uncovering"

81:58

um and that this is a continuous process

82:00

and what the manifesto is just this is

82:02

what we've got, how we got so far, um,

82:04

so it's a snapshot of a point in time of

82:06

where we were in 2001. Yeah, all sorts of

82:09

subtleties to the manifesto,

82:12

but um, I think it had an impact in

82:15

the sense that, my feeling was, there was a

82:18

certain way that we wanted to write

82:20

software at Thoughtworks for our clients

82:22

in 2000, and it was a real struggle

82:25

because they didn't want to work the way

82:26

we wanted to. We said we want to put

82:28

all this effort into writing tests and

82:30

we was we want to have a build pro an

82:32

automated build process and we want to

82:34

do these kinds of things. We want to be

82:36

able to progress in small increments.

82:38

all of these kinds of things which were

82:40

anathema. You know, no, we've

82:42

got to have a big plan over five years

82:44

and we'll spend two years doing a design

82:46

and we'll produce a design and then

82:48

it'll get implemented over the next year

82:50

or so and then we'll start testing,

82:53

right? I mean, that was the the

82:54

mentality of how things ought to be

82:57

done.

82:57

>> Yeah. That was just the

82:59

commonly understood wisdom, right?

83:01

>> Yeah. And our notion of, no, we'd like

83:03

to do that entire process for a subset

83:05

of requirements in one month, please.

83:07

Only a month. And of course we really

83:08

wanted to do it in a week but you know

83:10

baby steps. And so to me the great thing

83:13

about agile is that we can actually go

83:16

into organizations and operate it much

83:19

closer to the way that we'd like to be

83:21

able to do. Our clients will let us work

83:23

the way we want to, to a much greater

83:25

extent than we were able to do

83:27

back in 2000. And so that is the

83:30

success. I just wanted the world to be

83:32

safe for those people that wanted to

83:33

work that way to be able to work that

83:35

way. Yeah, there's all sorts of other

83:36

bad things that have happened as a

83:38

result of all of this. But um on the

83:40

whole I think we are a bit better off

83:44

>> And do you see, like, the way you

83:45

look especially when you look at the

83:47

enterprise clients that you have a

83:49

lot more visibility into, do you see the

83:52

definite change from like 25 years ago

83:54

to? Like, the concepts of agile are

83:56

way more accepted like working with the

83:59

customer having a lot more incremental

84:01

delivery forgetting about these like

84:03

very long pieces of work, like,

84:05

it's just common everywhere, right? Can we

84:07

say that or at least

84:08

>> I would say we've made significant progress,

84:11

but compared to how we'd like it to be

84:14

and where our vision is, it is still a

84:17

pale shadow of what we

84:19

wanted. I mean, and I suspect most of

84:22

the 17 that are still with us would

84:24

agree with that. We still feel we can go

84:27

much, much better than we've

84:29

been, but we have actually made material

84:31

progress. And the thing is that we

84:33

were always in that situation where you

84:35

know we're kind of nudging our way

84:37

forwards at a much slower rate than

84:40

we'd like to be. Yeah. Now of course AI

84:42

is coming, and it

84:45

is everywhere and it will be everywhere.

84:47

And one thing with AI: so the core

84:50

idea behind agile was that you make

84:53

incremental improvements and the shorter

84:55

the better. And you could then

84:58

build software that incrementally starts

85:00

to improve. But today with AI,

85:03

especially with AI, there's going to be

85:04

more software everywhere. There already

85:06

is. And there's a sense that customers

85:08

don't necessarily want to wait for

85:09

incremental improvements. They want

85:11

to see quality upfront. Do you think

85:14

that agile will work just as well with

85:17

AI, with even shorter

85:19

increments or do you think we might

85:20

start to think about like some different

85:23

way to work with AI, putting on the

85:25

quality lens up front as well and

85:27

getting back to a little bit of you know

85:28

the spec-driven development, like getting

85:30

a version of the software that is just

85:32

great to start with. I don't know how

85:34

the AI thing is going to play out

85:36

because we're still in the early days. I

85:38

still feel that building things in terms

85:41

of small slices with, sort of,

85:45

humans reviewing it is still the way to

85:47

bet. What AI hopefully will allow us to

85:50

do is to be able to do those slices

85:52

faster um and maybe do a bit more in

85:55

each slice. But I'd

85:59

rather get smaller, more frequent slices

86:03

than more stuff in each slice. Improving

86:07

the frequency is usually what, I think,

86:09

we need to do, and just cycle through those

86:12

steps more rapidly. That's where I felt

86:14

we've had our biggest gains is

86:17

through that more rapid cycle rather

86:19

than trying to do more stuff in the same

86:21

cycle as it were. And I still get a

86:23

sense of that when talking to people

86:25

still saying, you know, can you look at

86:26

all of the things that you do in

86:28

software development and and increase

86:30

the frequency? Do half as much but in

86:33

half the time, and speed up that

86:36

cycle. Look for ways to speed that

86:38

through. And also, you know, just look

86:40

at what you're doing. Look for the

86:42

queues in your flow and figure out how to

86:45

cut those queues down. If you were able to

86:47

get from idea to running code

86:51

in two weeks, how do you get it down to

86:53

a week? Just try to constantly improve

86:56

that cycle time. And I still feel that

86:58

that's our best form of leverage at the

87:00

moment is improving cycle time.

87:02

>> Yeah. And I've been talking with some

87:03

of the leading AI labs on how they use

87:05

it because of course they're going to be

87:06

on the bleeding edge. They will use

87:08

this. It's also in their own interest

87:10

to use their own tools. At Anthropic, uh,

87:13

the Claude Code team, one of the

87:14

creators of Claude Code, Boris, he shared

87:16

how he did 20 prototypes of a feature,

87:19

of how the progress bar, when you do

87:21

a task how it lists out different steps

87:24

and how it shows you where it's at and

87:26

he built 20 different prototypes that he

87:28

all tried out and and got feedback on

87:30

and decided which one to go with, in two days,

87:32

and he showed me, so actually

87:35

he had videos, he just recorded

87:37

these as he went: the exact prompt that

87:39

he used, the output, and these were

87:40

interactive prototypes. So they were

87:42

not just, you know, like, on paper,

87:43

but they were inside.

87:45

>> And to me, this was like, wow. Like if

87:47

if you would have told me I built 20

87:49

prototypes and you asked me how long it

87:50

took, I would have said two weeks,

87:52

maybe a week if they were small,

87:54

like paper prototypes. But you can

87:57

still speed it up and it is still

87:58

manageable. Some of them he threw it

88:00

away. Some of them he shared with a

88:02

small group, a bigger group. So I feel

88:05

you're right that we have not

88:08

reached the limit of how quickly

88:10

we can look at things.

88:12

>> Yeah, it comes back to feedback loops. I

88:14

mean so much of it is asking: how do we

88:16

introduce feedback loops into the

88:18

process? I mean how do we tighten those

88:20

feedback loops so we get the feedback

88:21

faster so that we're able to learn

88:24

because in the end, again, it comes

88:25

back to you know we have to be learning

88:27

about what it is we're trying to do.

88:29

Speaking about learning uh and keeping

88:30

up to date uh how do you learn about AI?

88:34

How do you keep up to date with with

88:35

what's happening? What approaches work

88:36

for you? And what are approaches you see

88:39

your colleagues uh follow who are also

88:42

staying up with you know what's going

88:43

on? Well, the main way I learn these

88:46

days is by working with people who are

88:48

writing articles that, um, are going

88:50

onto my site because my primary effort

88:54

these days is getting good articles onto

88:56

my site. And my view is that I'm not

88:59

the best person to write this stuff

89:00

because I'm not doing the day-to-day

89:02

production work. I haven't been for

89:04

a long time. The only production code I

89:06

write is ironically the code that runs

89:08

the website. I still write code. I still

89:10

generate stack traces but it's only

89:12

within this very very esoteric little

89:14

area. Um, so as a result, it's better

89:17

for me to work with people who actually

89:19

are doing this kind of work and help

89:21

them get their ideas and their

89:23

lessons, and express them to as many

89:26

people as possible. So I'm learning

89:28

through the process of working with

89:30

people to write their ideas down which

89:32

is a very interesting way of learning

89:33

because of course you're very

89:35

deeply involved in in the editing

89:37

process for a lot of that material and

89:39

that's my primary form. I do

89:42

some experimentation when I get the

89:44

chance not as much as I'd like but I do

89:46

see that as a second priority to working

89:48

with people. So, you know, it's of necessity

89:51

only in the off time that I get

89:53

to do that. Um and of course reading

89:56

from where I feel are some of the better

89:58

sources. I mean fortunately one of those

89:59

better sources is Birgitta, who has been, um,

90:01

writing with me. So that's good. Um

90:04

Simon

90:04

>> he's excellent. Yeah.

90:05

>> Birgitta's stuff is superb. Um, Simon

90:08

Willison, I keep an eye on what he's doing

90:10

all the time. Um, I wish I had his energy and

90:15

work rate for getting stuff out.

90:16

Actually, I wish I had your energy, with

90:18

the amount of stuff you get out these days.

90:20

And so I look for sources like that. I'm

90:22

always interested in what folks like

90:24

Kent are up to because, let's face it,

90:26

so much of my career has been leeching

90:28

off Kent's ideas and um there's no

90:32

reason to stop doing that if it's still

90:33

working, right? Um and so those are the

90:36

kinds of sources I mean then sometimes

90:38

some books come

90:39

through, and I work through those. So a

90:42

lot of it is in that kind of direction.

90:44

I might even watch a video occasionally

90:46

although I really hate watching videos.

90:47

So yeah. So it sounds like: find the sources

90:50

of the people you trust, the sources you

90:51

trust. Again, your blog I can very

90:53

much recommend it because you have

90:55

several people writing on it. Uh so you

90:58

actually have a pretty good frequency of

91:00

in-depth articles about interesting like

91:02

I rarely see topics that have been

91:05

discussed in depth, and so I enjoy

91:07

checking it out because

91:09

of it. I mean one of the questions that

91:11

I've been pondering on when

91:14

asked is: so, how do you identify what a

91:16

good source of information is? And this

91:19

is more general, this is due to our

91:21

profession but of course due to the

91:23

world generally as we seem to be in an

91:25

epistemological crisis of trying to

91:27

understand what's going on in the world

91:29

and and at some point I'm going to sit

91:31

down and write this down and I'll get a

91:32

more coherent um answer from it but part

91:36

of what I'm always looking for is um a

91:39

lack of certainty is I think a good

91:42

thing when people tell me oh I know the

91:44

answer to this I'm usually a good bit

91:47

more suspicious and I'm much more

91:50

conscious of when people say this is

91:52

what I understand at the moment but it's

91:54

fairly unclear. I remember one of my

91:57

favorite early books when I was writing

91:59

on the, um, software architecture,

92:03

um, I remember desperately

92:04

looking for something in the Microsoft

92:06

world as opposed to something in the

92:08

Java world; there was a lot being

92:09

written in the Java world. This is back

92:10

around the late 90s. Lots of stuff was

92:13

being written in Java land, not much in

92:15

Microsoft land. And when I discovered

92:17

this Swedish guy, Jimmy Nilsson. And his

92:19

book was full of stuff that says, well,

92:22

this is how I'm feeling about

92:24

the way to approach this stuff. He was

92:26

very tentative all the time, very much

92:28

clear of this was how he was currently

92:30

feeling, but he understood that things

92:33

might change. I've since got to know

92:35

Jimmy really well and he's a fantastic

92:37

guy. But what impressed me so much and

92:39

what influenced me so much is I felt

92:41

very much the degree to which oh this is

92:44

somebody I can trust because they're not

92:46

trying to give me this false sense of

92:49

certainty and confidence and I think

92:51

that's important also someone who's keen

92:54

to explore nuances and saying well this

92:57

works in these circumstances not if

92:59

somebody tells me oh you should always

93:01

use microservices or somebody says you

93:03

should never use microservices I mean

93:05

both of those arguments can be

93:07

completely discounted. It's when you

93:09

say, "Ah, these are the factors that you

93:11

should be considering about whether to

93:12

go in this direction or that direction."

93:14

Whenever someone is stepping back and

93:15

saying, "Ah, it's it's a trade-off.

93:18

There's various things involved. Here's

93:19

the factors you should consider." And it's not

93:21

going to be a simple answer. You've got

93:23

to dig into the nuances. Then again,

93:25

that increases my confidence because

93:27

again, I'm feeling this is someone who's

93:30

thinking these things through and not

93:32

just coming on a on a sort of simple

93:34

railroad and and going down it. And I

93:36

guess with these sources, you can also

93:38

trust that everything we do in software

93:41

engineering, it's going to be

93:42

trade-offs, right? The most common

93:43

answer of, like, how long will it take

93:46

is "it depends". It depends on, are we

93:49

doing a prototype, it depends on, do I

93:52

know the technology, etc. So if

93:54

you're reading sources or if you're

93:55

accessing sources where they tell you,

93:59

okay, in my situation, you actually

94:00

learn about their situation and you can

94:03

figure out like, okay, in this specific

94:05

case for them, this worked or it didn't

94:07

work and later you can probably apply it

94:09

a bit better because again, it's it's

94:11

very different if you're going to be

94:12

working as a software engineer inside a

94:14

highly regulated retailer that's 70

94:16

years old versus you've just started a

94:18

brand new startup where go and knock

94:20

yourself out, zero customers.

94:23

a huge difference. Yeah. And that's,

94:25

I mean, again, you see it, we

94:28

frankly see it with

94:30

clients. A lot of clients say, give us the

94:31

answer, give us the cookbook,

94:34

straightforward answer that I just need

94:35

to apply. Yeah. If you're looking for

94:37

that kind of cookbook answer you're

94:38

going to get in trouble because anybody

94:40

who will tell you there's a cookbook

94:41

answer, they either don't understand it

94:43

or they're deliberately covering it up

94:45

for you because there's always tons of

94:47

nuance involved. We keep going

94:49

back to this, like, now more than 50 year

94:51

old article, the no silver bullets, right? One

94:54

question uh I got from online I asked

94:56

what people would like to ask from you

94:58

is what would your advice be today for

95:02

junior software engineers who are

95:03

starting out there's all this AI stuff

95:05

going on. We know, with learning, I

95:07

think you also mentioned or it might

95:09

have been Unmesh who mentioned, with

95:10

junior engineers it could be a bit

95:13

iffy, if you're relying too much on AI,

95:16

will that hinder your learning because

95:18

learning is important. If one of these

95:19

engineers asked you like, "Hey, I'm a

95:21

junior engineer. I'd like to eventually

95:23

become a more experienced engineer, what

95:26

tactics would you advise me, especially

95:28

with AI tools? Should I rely on them?

95:30

Should I not? Is is there something that

95:33

might work better than other things?"

95:35

>> Well, I mean, certainly we have to be

95:38

using AI tools and exploring their use.

95:40

The hard part, if you're more junior,

95:42

is you don't have this sense of to

95:45

what extent is the output I'm getting

95:47

good. And in many ways the answer is what

95:50

it's always been: find some good senior

95:52

engineers who will mentor you because

95:54

that's the best way that you're going to

95:55

learn this stuff. And a good

95:59

experienced mentor is worth their weight

96:01

in gold, and in fact, in many ways, it's worth

96:06

prioritizing that above many other

96:07

things when it comes to your

96:09

career: getting that mentor. I mean,

96:11

again, me finding Jim Odell early on in

96:14

my career was enormously valuable. The

96:17

best thing that could have possibly

96:18

happened to me was just blind luck. Um,

96:21

but seek out somebody like that who can

96:23

be your mentor. I mean, although we're

96:25

peers in some ways, I often think of

96:27

Kent Beck as a mentor. Um, because you

96:30

know, we may be the same age or

96:32

whatever, but his thinking is always

96:34

leaping forwards. And so, watching what

96:37

he's doing has been very valuable. So again,

96:39

find somebody like that. The AI can be

96:42

handy, but always remember it's gullible

96:45

and it's likely to lie to you. So be

96:49

probing in asking it: okay, why are you

96:52

giving me this advice? What are your

96:54

sources?

96:55

What's leading you to say this? I

96:57

mean, I remember, this is generally

97:00

a good thing: whenever people are

97:02

giving you something, to say, what is

97:04

leading you to say that? What is the

97:07

background? what is the context you're

97:08

coming from? What are the things that

97:10

are leading you to this point of view?

97:13

And by probing that, you can get a

97:16

better understanding of where

97:17

they're coming from. And I think you

97:20

have to do the same thing with the AI

97:21

because in the end the AI is

97:23

just regurgitating something it saw

97:25

on the internet. So the question is did

97:28

it see good stuff on the internet or did

97:29

it see most of the crap that's on the

97:31

internet, right? And but if you can find

97:33

your way to the good stuff, then that

97:35

can be much more useful.

97:37

>> And looking at all this change that

97:38

that's happening right now with AI, LLMs,

97:41

how do you feel about the tech industry

97:43

in in general?

97:44

>> I mean, in a broad sense, I'm

97:46

positive because I still feel there are

97:48

so many huge things that can be

97:50

done with technology and software. Um,

97:53

and we are on, you know, we're still in

97:55

a situation where demand is way more

97:57

than we can imagine. But that's a

97:59

long-term view. I mean at the moment

98:01

we're in this, I'm going to say, very

98:04

strange phase. Life has always been a strange

98:06

phase, I mean, strange in different

98:08

ways. The current strangeness is we're

98:11

basically in a huge, certainly in, um,

98:16

the developed world, depression. I

98:18

mean we've seen a huge amount of job

98:20

layoffs. I mean, I've heard numbers

98:22

bandied around of a quarter million, half a

98:25

million jobs lost. I mean it's that kind

98:27

of magnitude. I mean, we're seeing it. I

98:30

mean, at Thoughtworks, we used to be

98:32

growing at 20% a year all the time until

98:35

about 2021. I mean, we've

98:38

hit a wall, and we see our clients are

98:41

just not spending the money on um this

98:45

stuff. I mean, AI is doing its own

98:47

thing, and it's almost like a

98:49

separate thing going on, and it's

98:51

clearly bubbly. But the

98:53

thing with bubbles is you never know how

98:55

big they're going to grow. You don't

98:56

know how long it's going to take before

98:58

they pop. And you don't know what's

98:59

going to be after the pop. I mean, all

99:02

this stuff is unpredictable. I do think

99:04

there's value in AI in a way that say

99:06

there wasn't with blockchain and crypto.

99:08

There's definitely stuff in AI, but

99:10

exactly how it's going to pan out, who

99:11

knows? And I mean, I went through this

99:12

cycle with stuff in the '90s and 2000s.

99:16

So, it's a repeat of that, only

99:18

at probably an order of magnitude more

99:20

scale. Um, so all of that's going on,

99:23

but really what's happening, the most

99:25

important thing that's hit us is not AI.

99:27

It's the end of zero interest rates.

99:29

That's the big thing that really hit us.

99:31

And that's why the job losses started

99:33

before AI, because of that kicking in, and

99:37

we don't know how that's going to change

99:39

because this is a much more

99:40

macroeconomic thing. We have a loony

99:43

driving the bus in the United

99:44

States. We have all sorts of other

99:47

pressures going on internationally.

99:49

Great uncertainty at the moment and

99:51

that's affecting us because it means

99:53

that businesses aren't investing. And

99:55

while businesses aren't investing, it's

99:57

hard to make much progress in

100:00

the software world. And so we have this

100:02

weird mix of no investment, pretty much a

100:05

depression, in the software

100:07

industry with an AI bubble going on. And

100:09

they're both happening at the same time.

100:11

>> And on the other end, yeah, it

100:13

depends on where you are. Like I was in

100:15

Silicon Valley and if you're an AI

100:16

company, from the inside it all looks

100:18

great. If you're outside, again, you can

100:21

benefit from it, but it's a

100:23

lot more careful. And if you're outside

100:25

of this bubble, let's say you're at a

100:26

startup or a company that is not in

100:28

AI, it's just tough. So, you

100:31

have these worlds happening. I

100:33

mean, this is still, I think, an

100:35

industry with plenty of potential in the

100:36

future. I think it's a good one

100:38

to get into. It's not, you know, the

100:40

timing is not as great as it would be

100:42

getting into this industry in say 2005.

100:46

Um, but you know, I still feel there's

100:49

a good profession here. I

100:50

don't think AI is going to wipe out

100:52

software development. Um I think it'll

100:54

change it in a really manifest way like

100:56

the change from assembly to high level

100:58

languages did, but the core skills are

101:01

still there. And the core skills of being

101:03

a good software developer, in my view, are

101:05

not so much about writing

101:07

code. That's part of the skill. A lot of

101:09

the skill is understanding what to write

101:12

which is communication and particularly

101:14

communication with the users of software

101:16

and crossing that divide which has

101:18

always been the most critical

101:21

communication path

101:22

>> And you've also mentioned the expert

101:24

generalist is becoming a lot more important,

101:26

which, when I looked into the

101:28

details, we'll link the article in the show

101:30

notes, I think it was,

101:31

again,

101:32

>> Unmesh has been on fire. He's on fire. But

101:37

all the traits seem to have

101:39

nothing to do with AI. It's about

101:41

curiosity. It's about going deep.

101:43

It's about going broad. It sounds

101:45

like I'm hearing more and more

101:47

people who are thinking longer about

101:49

what it means to be a standout software

101:51

engineer. The basics don't seem to

101:52

change,

101:53

>> Right? Yeah. And I do think that

101:56

it has always been communication,

101:59

and being able to collaborate

102:01

effectively with people has always been

102:03

to my mind the outstanding quality of

102:05

what really makes the very best

102:07

developers come through, certainly in

102:10

the enterprise commercial world which is

102:13

the one I'm most familiar with because

102:15

all the software we're

102:17

writing is for people doing

102:19

something very different to what we do.

102:20

I remember when I was working in the health

102:22

service, I always said, you know,

102:23

here I am doing this conceptual modeling

102:26

of health care. I understand a huge

102:27

amount about the process of of health

102:29

care. You are not going to want me to

102:32

treat whatever your medical problems are

102:33

because I am never going to have that

102:35

skill because I'm not a doctor. Yeah.

102:37

>> And so therefore the doctors have to be

102:38

involved in the process.

102:40

>> So, in closing, I just wanted to do some

102:42

rapid-fire questions where I'll ask and

102:44

then you say what comes to mind. What

102:46

is your favorite programming language

102:48

and why?

102:49

>> Um I would say at the moment my favorite

102:51

programming language is Ruby, because

102:52

I'm so familiar with

102:54

it. I've been using it for so long. But

102:56

the one that is my love is Smalltalk,

102:58

without a doubt. Smalltalk. There was

103:00

nothing as much fun as programming in

103:03

Smalltalk when I was able to do it in

103:05

the '90s. That was such a

103:07

fantastic environment.

103:08

>> You and Kent Beck. And Kent Beck is

103:10

writing his Smalltalk server. It's

103:13

his baby. I think he's making

103:15

progress

103:15

>> And I mean, there is still stuff

103:16

going on. There is the Pharo project

103:18

in Smalltalk. And I keep thinking, you

103:20

know, if I could just take off some

103:22

weeks and stop everything else I was

103:24

doing, maybe investigate, see what's

103:26

going on in the Smalltalk world again,

103:28

because it had, I mean, it still has so much

103:31

power in that language.

103:32

>> What are one or two books you would

103:34

recommend? Uh, and why?

103:36

>> So, a book I I do particularly like to

103:39

recommend is Thinking, Fast and Slow by

103:41

Daniel Kahneman. I like it because he

103:46

does a really good job of trying to give

103:49

you an intuition about numbers and

103:52

spotting some of the many mistakes and

103:54

fallacies we make when we're thinking in

103:56

terms of probability and statistics. And

103:59

this is important in software

104:00

development, because a lot of

104:02

what we do would be greatly enhanced

104:04

if we could understand the

104:06

statistical effects of what we see, but

104:08

also in life in general, because I think

104:11

our world would be a hell of a lot

104:13

better if way more people understood a

104:15

bit more about probability and

104:16

statistics than they do. I mean, like

104:19

most kids, probably, when I did

104:21

maths at school, it was heavily

104:22

calculus-based. I really do feel that it

104:25

would have been a lot better if, you know,

104:27

it was much more statistics-based,

104:29

because of the knowledge of being able

104:31

to use that. Well, I mean, one of the

104:34

things that has helped me most with

104:35

probability and

104:37

probabilistic reasoning has been the

104:39

fact that I'm heavily into tabletop

104:41

gaming where you have to constantly

104:43

think in terms of probabilities, and

104:47

I just honestly feel that knowing that

104:50

is important and this book is I think a

104:52

great way to get into that and so it was

104:55

one of the best reads I've had in the

104:58

last few years. Another book that I'd

105:00

mention, that is completely separate and

105:02

challenging in a completely

105:04

different way, that I've been totally

105:06

obsessed with is a book called The Power

105:08

Broker. So, this is a book about a

105:13

guy called Robert Moses who most people

105:14

have never heard of but was the most

105:17

powerful official in New York City for

105:19

about 40 years, from about the 1920s

105:23

to the 1960s. He was never elected to any

105:24

office. He controlled more money than

105:26

the mayor or the governor of New York

105:28

during that time. And this book is about

105:31

how he rose to power, how power works

105:35

in a democratic society, often

105:38

not in plain sight. And it's a

105:41

fascinating book for that. It's also a

105:44

fascinating book because it is so well

105:46

written. There have been moments when I

105:48

would just, you know, be reading

105:49

a several-page passage of something, and

105:51

I would just have to stop to just

105:53

appreciate how brilliant what I'd just

105:55

read was. And that's valuable because to

106:00

be a better writer, and I think we all

106:02

gain by being a better writer, it's

106:04

really important to read really good

106:06

writing. And his writing is magnificent.

106:09

The downside is it's 1,200 pages. It's a

106:13

really long book, but I was enjoying it

106:16

so much that I didn't mind. And then

106:18

once you go on from that, you move on to

106:18

his second biography, because the author,

106:20

Robert Caro, has only written two, and that's his

106:22

ongoing five-volume biography of

106:24

Lyndon Baines Johnson, LBJ, which is

106:29

equally brilliant and I've been reading

106:31

it, but it's a lot more to ask because

106:33

it's four volumes so far and he still

106:34

hasn't finished the fifth. But again,

106:37

there are moments when I was just

106:38

gobsmacked by how brilliant the writing

106:40

was and gobsmacked, again, by the way

106:43

power works in a democratic society and

106:47

I think, to understand how our world

106:49

works, these kinds of books are really,

106:51

really valuable.

106:52

>> And finally, can you give us a board

106:53

game recommendation? You are very

106:55

heavily into board games. Your

106:56

website has a list of them as well.

106:59

Yeah, it's a tricky one because it's

107:02

kind of like saying I'm really

107:04

interested in getting into watching

107:06

movies. Which would be the movie you

107:07

would recommend? Right? Because, I get

107:08

it, there are so many different tastes and things.

107:11

If I'm going to pick something that I

107:13

think is not too complicated for someone to

107:16

get into, but that still has

107:18

quite a lot of richness at the moment, I

107:20

think the game I'd pick out would be

107:22

something called Concordia. It's fairly

107:24

abstract in its nature, but it's easy to

107:27

get into and it's got quite a good bit

107:28

of decision-making in the process.

107:31

>> Well, Martin, thank you so much. It was

107:34

great that we could make it happen in

107:35

person as well.

107:36

>> Yes, that worked out really

107:38

well. I just happened to be in Amsterdam

107:40

for something else and uh I know

107:42

somebody in Amsterdam, so I thought I'd

107:43

get in touch, and we finally got the

107:45

chance to meet face to face.

107:48

>> It was amazing. Thank you.

107:49

>> Thank you. Thanks very much to Martin

107:51

for this interesting conversation. One

107:53

of the things that really stuck with me

107:55

is how the single biggest change with AI

107:57

is about how we're going from

107:58

deterministic systems to

108:00

non-deterministic ones. This means that

108:02

our existing software engineering

108:03

approaches that were based on assuming a

108:05

fully deterministic system like testing,

108:08

refactoring, and so on, probably

108:10

won't work that well, and we might need

108:12

new ones, unless we can make elements

108:14

more deterministic, that is. I also liked

108:16

how Martin mentioned to us that the

108:18

problem with vibe coding is that when

108:20

you stop paying attention to the code

108:22

generated, you stop learning, and then you

108:24

stop understanding and you might end up

108:26

with software that you have no

108:27

understanding of. So be mindful, and use it only in the

108:30

cases where you are happy with this

108:31

trade-off. For more reading on AI

108:33

engineering best practices and an

108:35

overview of how the software engineering

108:36

field changed in the past 50 years, check

108:40

out related deep dives in The Pragmatic

108:41

Engineer, which are linked in the show

108:41

notes below. If you've enjoyed this

108:43

podcast, please do subscribe on your

108:44

favorite podcast platform and on

108:46

YouTube. This helps more people discover

108:48

the podcast and a special thank you if

108:50

you leave a rating as well. Thanks and

108:52

see you in the next episode.
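The closing point about deterministic testing of non-deterministic systems can be sketched in code: instead of asserting an exact output string (which breaks when each run differs), assert properties that every valid output must satisfy. This is an illustrative sketch only; the summarize function below is a hypothetical stand-in for an LLM call, not a real API.

```python
import random

def summarize(text: str) -> str:
    """Hypothetical stand-in for a non-deterministic LLM call:
    the wording varies from run to run."""
    opener = random.choice(["In short,", "Briefly,", "To summarize,"])
    first_sentence = text.split(".")[0].strip()
    return f"{opener} {first_sentence.lower()}."

def is_acceptable_summary(text: str, summary: str) -> bool:
    """Property-based check: assert invariants of every valid output
    (ends like a sentence, is no longer than the input),
    rather than one exact string."""
    return summary.endswith(".") and len(summary) <= len(text)

text = ("Non-determinism changes how we test software. "
        "Exact string matches no longer hold run to run.")

# An exact-match assertion would be flaky here;
# the property holds on every run.
for _ in range(100):
    assert is_acceptable_summary(text, summarize(text))
```

The same idea generalizes: pin down whatever can be made deterministic (fixed seeds, temperature zero where a real API allows it) and test the rest through invariants.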

Interactive Summary

This video features an interview with Martin Fowler discussing the impact of AI on software engineering, drawing parallels to historical technological shifts like the move from assembly to high-level languages. He touches on his career journey, the evolution of software development practices, and the importance of foundational principles like refactoring and agile methodologies. A significant portion of the discussion revolves around the shift from deterministic to non-deterministic systems brought about by AI, the challenges and opportunities this presents, and the need for new approaches to software engineering. Fowler also shares insights on the Thoughtworks Technology Radar, the nuances of "vibe coding," the importance of continuous learning, and the future of the tech industry amidst economic shifts and technological advancements.
