
Martin Fowler & Kent Beck: Frameworks for reinventing software, again and again

Transcript

0:05

Welcome, everyone. It's so nice

0:07

to see all of you. It's so nice to see a

0:10

lot of friendly faces. A lot

0:12

of you said hi, and it's also just really good

0:14

to meet Martin and Kent. And I was

0:16

joking a little bit beforehand that I

0:20

did not expect Martin Fowler and

0:22

Kent Beck to walk into a place where

0:24

it's all the kind of the hottest AI

0:26

startups and all of them. But here we

0:28

are. And we're here for a very, very

0:29

good reason. What?

0:32

>> What? What the hell is that supposed to

0:34

mean?

0:37

>> Oh. Oh gosh. Here we go.

0:40

So Kent is going to hate me for this cuz

0:43

today I already called him old furniture

0:45

once.

0:46

I don't think that's fair.

0:49

Old furniture. Yeah, you're the old guy

0:51

in the crowd.

0:54

>> I'm at least a year younger than him.

0:58

But I'm psyched that both

1:01

of you are here. And a week ago, me,

1:04

Martin, Kent, and a bunch of other people

1:06

were in Deer Valley, Utah at the Future

1:10

of Software Engineering conference that

1:11

Martin pulled together with some very

1:14

interesting thinkers, and we were talking

1:16

about how it was nice to reflect that

1:18

25 years ago the Agile Manifesto was

1:21

created.

1:29

There were 17 people, and two of

1:31

you were there.

1:34

Since then, you've really helped shape

1:38

software engineering, as you've

1:41

influenced it and made major

1:42

contributions. Could I ask both of you

1:45

to recap the feedback you've

1:47

gotten over these years,

1:49

these decades? What ideas really stuck

1:52

with engineers? What do you

1:54

hear a lot? Do they tell you, "Thank you

1:56

for this, I'm using a lot of this"?

2:00

>> That's a really interesting question. I

2:02

haven't really reflected on that. I mean,

2:05

a lot of people talk about things in

2:07

general that we've worked on, whether

2:10

it's agile broadly or refactoring

2:13

particularly in my case, because that was

2:15

a book I worked on. But I don't know

2:18

that any one piece of that stands out

2:21

necessarily.

2:22

>> So I get, "Thank you so much for test-

2:25

driven development." I also get, "Test-

2:30

driven development

2:34

ruined my life.

2:36

My dog left me,

2:39

my house burned down, and it's all your

2:41

fault. So, I don't know. Does that

2:43

answer your question?

2:45

>> I think TDD has been very

2:47

divisive. I've been a

2:50

convert at some point, and then I hated

2:52

it. But it's interesting, because I feel

2:53

a lot of these ideas are like this,

2:55

right? They're meant to be provocative.

2:56

They're meant to push you.

2:58

>> Yeah. I was also actually

3:01

chatting with somebody yesterday

3:03

who's really pushing the AI envelope.

3:05

And his comment was, well, thank

3:07

goodness for all of your pushing of TDD

3:09

for the last 20 years because it's

3:11

really important now that we've got AI

3:13

agents. And it's interesting to

3:17

hear that feedback, because I'm always

3:20

suspicious of it, because I want it to be

3:22

true. You know, I'm the kind of guy who,

3:24

when I hear something I like, is

3:26

kind of thinking, am I just making

3:27

this up? But it does make sense to me

3:31

that, you know, when we've got a big

3:32

powerful genie, you really have to learn

3:34

how to verify that it's doing the right

3:36

thing for you,

3:38

>> which we've been practicing for 25

3:40

years.

3:41

>> Yeah. Well, I mean, I'm not a big

3:42

powerful genie myself, but I still

3:44

needed the tests to make sure I was

3:46

doing the right thing.
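[Editor's note: the verification idea in this exchange can be sketched as a test-first loop: pin down the expected behavior in a test before any implementation exists, then run it against whatever comes back, whether a human or a genie wrote it. A minimal sketch in Python; the `slug` function and its behavior are hypothetical illustrations, not something from the conversation.]

```python
# Test-first sketch: the test states the wanted behavior BEFORE an
# implementation exists, so it can verify whatever a human or an AI
# agent hands back. `slug` is a hypothetical example function.

def slug(title: str) -> str:
    # Implementation under test; imagine this came back from an agent.
    return "-".join(title.lower().split())

def test_slug():
    # The verification step: these pass or fail regardless of who
    # (or what) wrote `slug`.
    assert slug("Hello World") == "hello-world"
    assert slug("  spaced   out  ") == "spaced-out"
    assert slug("already-fine") == "already-fine"

test_slug()
print("all checks passed")
```

The point is not the function itself but the ordering: the checks exist independently of the implementation, which is exactly what makes them useful when the implementation arrives from a "big powerful genie".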

3:49

>> So, a lot of folks here will

3:51

know you from the podcast that we did

3:53

together. They probably

3:54

read some of your books. Martin, you're

3:56

a really prolific writer. But can you tell

3:58

us what are you up to these days? What

4:01

does your day-to-day look like? How do

4:02

you stay in touch with technology? I got

4:04

a really rude comment that I still

4:06

have to fight someone about on LinkedIn

4:08

about this. They said like, "Oh, your

4:10

your conference like you're having

4:11

authors here who are like out of touch

4:13

with technology." And I'm like, "Do you

4:15

even know who Martin Fowler and Kent Beck

4:18

are?" But I just wanted to ask,

4:20

like, these days, what are

4:22

you up to? Seriously?

4:23

>> Well, when I finished the second

4:26

edition of the Refactoring book, which was,

4:28

you know, five or six years ago, I toyed

4:31

with the idea of writing another book.

4:32

I've got several half-written books out

4:35

there to work on, and I decided I should

4:37

not do that. Instead, what I should do

4:40

was work with people who are actually

4:43

still doing real work on real projects,

4:45

writing real code, as it were. And

4:48

get them to get their ideas out and what

4:51

they learn out. So, that's been my main

4:54

project ever since, primarily focused on

4:57

the martinfowler.com website because,

5:00

hey, I control that. There's no big

5:01

corporation that's going to sweep in

5:03

and clobber it. At least not without

5:06

me selling out and getting lots of

5:07

money out of it.

5:10

And as the AI thing has come, I've

5:14

been very keen to capture that. What

5:16

my big focus is on is trying to

5:20

understand the details of people's workflows

5:22

and what exactly they are doing: what

5:24

are the kinds of conversations they're

5:25

having with the genie, as Kent

5:28

calls it? And, you know, if you're

5:31

reviewing things, what are you looking

5:32

for? And particularly, what decisions

5:36

are we the humans still making,

5:38

and how is that decision flow changing?

5:42

So my interest is not

5:45

what I'm doing; it's what my colleagues

5:46

are doing, and trying to spread that

5:48

around. That's my focus.

5:51

>> And so, my

5:53

personal mission in life is to

5:57

help geeks feel safe in the world. And

6:00

uh our people do not feel safe right now

6:04

for some good reasons and some not good

6:06

reasons. And one of the things I noticed

6:10

is for 25 years we've kind of had the

6:13

answers. Somebody comes to us and says

6:14

we have too many bugs. And we say, all

6:16

right, well here's how you write tests.

6:18

Oh, I can't write tests. Well, here's

6:20

how you design so you can write tests

6:21

and just kind of press play

6:25

on the recorder. And the thing that's

6:28

changed is at this moment nobody knows

6:31

the answers to anything.

6:34

And so what I've been trying to do is

6:37

both for my own geeky curiosity

6:40

satisfaction,

6:42

let me go back into explore mode and

6:44

find out as well as I can

6:48

uh what you can do to be effective with

6:51

these new tools and then demonstrate

6:54

that to the next generation of people

6:57

who've been used to getting answers. Oh,

7:00

I'm having some trouble. let me look up

7:02

in the book what the solution is. That

7:04

worked for the last 20 years, and it

7:07

doesn't work for the last year, and won't

7:10

work for an extended period. So, as

7:14

seniors, I figure it behooves us to

7:16

demonstrate not just how to use these

7:18

tools effectively, but how to figure out

7:20

how to use these tools effectively

7:22

because that's a whole different set of

7:24

skills. And you say, all right, we don't

7:27

know what will come or how; everyone has

7:29

to figure it out. >> I wanted to take you

7:32

back into your professional journey. One

7:34

thing that you share is you have seen

7:36

a lot more than a lot of us, myself

7:38

included, and a lot of people in the room

7:39

as well. Do you remember a time where

7:42

there was a technology change which

7:44

looked similarly kind of unpredictable

7:46

or scary like AI does right now? What

7:50

was the thing that comes the closest in

7:53

your career?

7:54

>> Well, nothing has hit with the magnitude

7:58

of AI. I mean, this is a whole

8:01

size difference from anything that we've

8:03

faced before. On a smaller scale, I

8:09

would say we were very much involved

8:11

in the growth of object-oriented

8:13

languages, and that scared a lot

8:15

of people. It didn't scare us so much

8:16

because we were part of it. I would

8:20

say that the internet

8:22

had a huge impact upon us all. And of

8:27

course, obviously, we were spreading

8:29

the challenge of agile software

8:31

development, and that had a very big

8:33

impact on a lot of organizations, as

8:36

you could tell by how hard they resisted

8:38

it. But the thing about AI is that,

8:41

you know, with all of these things, we were

8:44

talking about how important they were

8:45

and how valuable they were, and trying to

8:47

persuade people of the importance of

8:48

them. Yes, even the internet; that may

8:50

sound surprising, but there were people

8:52

who weren't thinking that was

8:53

important. But with AI, there's kind of no

8:56

argument about how important it is.

8:58

I mean, you cannot put

9:01

blinkers on to deny the importance of

9:03

this thing.

9:04

So the other analogy that I have is

9:07

to the introduction of the

9:08

microprocessor.

9:10

Before that, computers were a big box.

9:12

You couldn't move them around. If you

9:14

wanted another one, you'd mortgage your

9:17

house again. It

9:20

was a big deal. And I was a kid in

9:23

Silicon Valley with my dad as a

9:25

programmer when the Intel 4004 hit and we

9:30

went, "Wait a minute, that's a

9:32

computer." Oh my goodness, the the

9:36

possibilities

9:38

suddenly expanded. If you can figure out

9:41

how to write software, if you can figure

9:43

out how to design hardware around this

9:45

thing, you can suddenly do things we

9:48

can't even imagine. And so, uh, I think

9:52

part of AI is this expansion

9:56

of imagination. So, I'm writing projects

9:59

that are ridiculously ambitious. I'm uh

10:03

working on a persistent Smalltalk. I

10:06

I'm writing library quality code for

10:09

Rust. I'm just, you know, anything I can

10:13

imagine trying to do, I'm going to

10:15

try and do it and see. Now, a bunch of

10:18

those fail and that's fine. That's part

10:20

of this process. But it's not like

10:24

this is the first time the heavens have

10:26

opened and we've been brought tons

10:30

of new opportunities

10:32

>> And back then, either with

10:34

object-oriented programming spreading or with

10:36

microprocessors. Do you remember what

10:39

the the feeling was in the industry and

10:41

what was the difference between

10:42

experienced professionals who you know

10:44

just

10:46

thrived in this new world and ones who

10:49

were just honestly left behind?

10:53

>> Yeah, with all of those there was

10:55

that sense of the mix between the

10:58

people chasing the hype and the

11:00

people who were saying, no, this is

11:02

nothing special. I think you've

11:05

always got to have that balance

11:08

of skepticism and curiosity

11:11

in order to be able to do that, and

11:13

you are selective about it. I mean, I

11:14

have been completely skeptical about

11:16

some big changes. I mean,

11:19

blockchain, for instance: I was extremely

11:21

skeptical about that. But as I

11:26

like to say about my

11:28

skepticism about technologies, which is

11:30

well rooted because I've seen so much

11:32

snake oil projected out by the industry

11:35

over the years: my skepticism has to

11:38

be absolute and total, which means I have

11:41

to be skeptical about my skepticism, and

11:43

that requires that curiosity. And I think

11:47

that's where the thing is: you've got to

11:48

be curious enough to say, maybe this looks

11:51

like hype, but maybe it isn't. How do I probe

11:55

in order to detect that there are signs

11:57

of something coming out there?

11:58

>> Yeah. What's the smallest

12:00

experiment I can run to verify, to my own

12:03

satisfaction (and everybody's level of

12:06

satisfaction is going to be different),

12:08

whether or not this claim is true.

12:12

That's the skill that has

12:13

suddenly, in the last year,

12:16

become a thousand times more valuable:

12:19

that skill of saying, what's the least I

12:21

can do to validate, to my own

12:24

satisfaction, whether this claim is true

12:26

or not.

12:26

>> But there's also another step in there

12:28

that you've also got to be aware that your

12:31

early interactions may not actually be a

12:34

true signal. I mean, when I started

12:37

playing around with AI, I guess it was

12:40

the Copilot-like stuff, about a year,

12:43

year and a half ago, I was pretty

12:46

unimpressed, right? I mean,

12:49

I'm an Emacs guy. I set up Emacs.

12:52

The one true editor. I set

12:57

up Emacs so that I could just have it

12:59

prompt and complete automatically

13:01

because, you know, Emacs is capable of

13:03

doing that. And I used it for maybe

13:06

three or four days before I just gave up,

13:08

because sometimes it would give you

13:09

something wonderful, but most of the time

13:11

it gave you such garbage that you would

13:12

just hit Control-K right away. And if that

13:15

had been my impression of AI, and I had

13:18

said, that's what I think of AI, I would

13:20

have just immediately flipped the bozo

13:23

switch on it just like I did with

13:25

blockchain. But on the other hand, I'm

13:28

also probing out there. So my

13:32

most valuable discovery in all of this

13:34

is, in the room next door, Simon Willison's

13:37

blog, which I read. And one of the things

13:40

that I took from that was: to use this

13:41

tool well, you have to learn how to use

13:44

it well. Which was also something very

13:46

true of object orientation. People would

13:48

say, "Oh, objects," you know, and you'd look

13:50

at what they were doing: you're not using

13:52

objects very well. In fact, Kent and I

13:55

were kind of... yeah, they were

13:57

using C++ and Java; they didn't actually

13:59

do the real stuff. But the point

14:03

was, you have to also be listening to the

14:06

folks out there and be able to read

14:08

with a critical eye and get a sense

14:10

of, okay, if you do run across a Simon

14:13

Willison, is he hyping everything as

14:16

wonderful, or does he seem to be

14:19

recognizing real problems at the same

14:21

time and giving me the straight stuff?

14:23

And that I found was really valuable. When

14:26

people give you that balance of good and

14:28

bad, and also, most importantly, are

14:31

prepared to say "I don't know," then

14:34

that's somebody worth listening to. So

14:36

him, and also some of my colleagues at

14:38

Thoughtworks like Mike Mason and Birgitta

14:40

Böckeler, they really kind of showed me

14:42

that I shouldn't be relying too

14:44

much on my initial reactions.

14:47

>> Yeah, and it can change week to week.

14:49

I'll try something with

14:52

Gemini one week and it fails miserably.

14:57

Then I try the same thing in Claude Code, and that

15:00

works pretty well, and then it doesn't

15:02

work well. And then I tried Gemini for

15:04

the same thing, and it works this week

15:07

when it didn't work last week. That's

15:09

you know, people want the answer, and

15:12

the answer is changing. So you can't

15:14

possibly in this environment have the

15:17

answer. Now, that's the bad news. The

15:20

good news is nobody else has the answer

15:22

either. So, you're just as smart as

15:26

everybody else because you're just as

15:27

ignorant as everybody else.

15:30

>> Is that reassuring?

15:32

>> Is that reassuring? By a show of hands.

15:41

>> One thing that struck me as a bit

15:45

of a similarity is back in 2001,

15:48

almost exactly 25 years ago, when the

15:50

Agile Manifesto came out with that

15:52

website with all the 17 names listed and

15:54

Kent Beck being the first one. Why were

15:56

you the first one?

15:57

>> Alphabetical.

15:58

Strictly alphabetical. But it is a source

16:02

of unending joy.

16:05

>> You can very much make of that what you will.

16:10

>> That kicked off some really

16:12

interesting things in the

16:13

industry, because my

16:16

interpretation was, like, well, use this

16:18

agile thing: here are these four pretty simple,

16:20

easy to understand, and easy to identify

16:22

with things to build better software,

16:24

faster, cheaper, higher quality, you name

16:27

it. Now, when I think of why so many

16:30

companies are adopting AI, they're kind

16:32

of expecting the same thing, better,

16:33

faster, cheaper, and so on. And so, I

16:36

wanted to ask: can you reflect on how agile

16:39

actually went? Speaking of snake oil,

16:41

>> Well, it turns out that people don't

16:43

want faster, cheaper, better.

16:46

>> Tell me more.

16:47

>> Inside a company, the incentives are so

16:50

misaligned with actually achieving that.

16:54

And so, as geeks, we try to achieve

16:58

that and say, "Well, it's 40% better and

17:01

it's 12% cheaper and it's less

17:03

fattening."

17:05

People will punish you for that if

17:08

it doesn't align with their

17:10

incentives inside of organizations.

17:12

Yeah. In the ideal organization,

17:15

everybody would care about the same

17:16

things. And that's just not the way it

17:19

works.

17:22

But I

17:23

>> So, and we haven't touched that

17:25

problem. So if AI is coming along to

17:28

promise the same things, we're going to

17:30

see exactly the same reaction.

17:32

>> And this is what I wanted to ask:

17:33

looking back at what you've seen

17:34

with agile, now 25 years on, it played out

17:36

at a lot slower pace. What

17:40

similarities do you see right now with

17:42

AI? How do you think the curve could

17:45

fit? And also, what is very different

17:46

about the agile movement, which

17:49

took the industry by like a very slow

17:51

storm, compared to now with AI?

17:54

>> Well, what's obviously very different is

17:55

the sheer magnitude and speed with

17:58

which AI is hitting. So that is

18:01

definitely different. But I suspect

18:03

there will still be some

18:05

similarities. One of them, I think, is

18:08

there will be a big difference between

18:10

people who use it well and people who

18:12

use it badly. And the trick is figuring

18:16

out how to use it well and putting the

18:18

effort in to learn to use it well. I

18:21

think there will be a big distinction

18:23

between those two groups. I think

18:26

another similarity is, I mean, the core

18:29

notions behind agile and extreme

18:32

programming are solid and good, but a

18:34

huge snake oil industry appeared around it.

18:37

The agile industrial complex as I like

18:39

to refer to it. Um, and that will

18:41

happen. That is happening with AI right

18:43

now. And it's often hard to see the

18:45

difference between where is the snake

18:47

oil and where is the real stuff. And so

18:49

that's another thing that you got to be

18:50

constantly probing and be aware of and

18:53

be wary of as you're looking at it.

18:55

>> Yeah. AI is an amplifier.

18:58

And if you're young and learning

19:03

quickly, AI is going to amplify that or

19:05

can amplify that. So I personally

19:08

think this is the golden age of

19:10

the junior programmer. I get people

19:11

coming to me all the time, oh my son

19:14

started his second year in CS and he

19:16

wants to go into something more

19:18

commercial like art history.

19:22

And I'd say, this is like if you're a

19:25

carpenter and they just introduced

19:29

the circular saw, and you think, ah, well,

19:32

carpentry is over; anybody can build a

19:36

house now.

19:38

Well, no, you have more powerful tools.

19:40

You have less time that you have to

19:43

spend doing, you know, kind of the crummy work.

19:47

So, I think the young

19:50

people who are learning fast are going

19:51

to learn faster. The experienced people

19:54

who are working effectively are going to

19:56

work more quickly and more effectively.

19:59

And my concern, and this is something I

20:01

learned last week, is that middle. If we

20:06

look back at the dot-com crash, there

20:09

was also a middle of people

20:12

who'd gotten into programming because it

20:14

was a way to make money. And those

20:16

people went into real estate, more or

20:19

less.

20:20

And I don't know where the middle's

20:23

going to go now, because that middle is

20:25

much bigger now than it was 25 years

20:27

ago. But that middle has also been

20:29

flushed out to some degree by the

20:32

retrenchment in the software industry,

20:34

the end of the zero interest rate

20:35

period. So that's an interesting

20:38

difference because we've had these two

20:40

things occurring at once, the AI boom

20:43

and the economic headwinds that we've

20:45

had in the last two or three years,

20:47

which is an interesting kind of mix

20:50

of things that wasn't the case back in

20:52

the '90s with the dot-com boom because

20:54

that was pretty much all solid boom.

20:57

>> Yeah. So, another interesting

21:01

confluence of factors is we have this

21:04

periodic "we get to get rid of all the

21:06

programmers, woohoo."

21:09

>> COBOL programming.

21:11

>> Starting with COBOL, right? When the

21:13

business analysts were going to be able

21:14

to write the programs and we didn't have

21:16

to have programmers anymore. And so that

21:20

comes back repeatedly. Agile was

21:23

definitely not that. We wanted

21:26

programmers to be more effective in

21:28

their jobs. And since we started it and

21:31

were programmers, we were able to push

21:33

that agenda pretty effectively. But now,

21:37

now we have this repeating,

21:39

hey, we get to get rid

21:41

of all the programmers, which it

21:42

behooves us as programmers to think

21:44

about why they keep wanting to get rid

21:46

of us.

21:49

Some of that's about us and some of it's

21:51

not, but some of it is. So we should

21:54

think about that. But it also amps

21:57

up the fear factor that everybody

21:59

is experiencing

22:01

>> And also, one of the interesting things

22:02

is when people say, oh, we're getting

22:04

rid of code. I mean, you hear people

22:07

saying, oh, no one's going to

22:08

write code in six months' time. I go to

22:11

myself, well, yeah, but what do you mean by

22:14

code?

22:15

Because that kind of implies nobody's

22:17

writing anything. Well, we're at least

22:19

doing some prompting; we're having some

22:21

interaction with the genie.

22:23

What's that going to be if it's not some

22:25

form of code in some way? I think the

22:28

nature of what code is is going to be

22:31

quite possibly very radically different,

22:34

but I think there is still a need to

22:36

produce it and be able to interact with

22:39

it in some way.

22:41

One thing I really like and respect

22:43

about both of you is you keep two feet on

22:46

the ground. We've heard from OpenAI, who

22:47

are a leading lab and of course they're

22:49

building amazing technology but they

22:51

also have to talk their book. We've

22:52

heard from Laura, who talks with so many

22:55

people and so has so many good insights.

22:57

In the end, you know, she will have a

22:59

small bias to help sell some of the

23:02

tools that help do this. Martin,

23:04

you're talking with so many companies,

23:06

especially large, skeptical

23:07

companies, as well as startups,

23:10

through Thoughtworks consulting and

23:12

advising. So what do you both

23:14

see on the ground? What is interesting

23:16

and what is surprising you about how

23:18

these smaller and larger companies, often

23:20

the more enterprisey ones, the more

23:22

traditional ones, what are they doing

23:23

with the technology, and how are they

23:25

thinking about it?

23:27

>> At the moment, large-scale confusion

23:29

and panic is pretty much the order of

23:32

the day right across the board

23:33

>> So if that is your strategy, you're

23:36

right in the middle

23:39

>> I mean, large

23:41

enterprises have this thing where they

23:43

just have enormous amounts of code in

23:45

complex systems that fit together in

23:47

difficult ways. Where

23:51

someone says, you know, can these

23:53

tools handle a million lines of code?

23:55

Because that's a smaller code base

23:57

as far as many of these systems are

23:58

concerned and it's a very different

24:00

picture to the startup world because no

24:03

you do not want to take a risk that's

24:05

going to cause your airline to go

24:07

offline for a day or two. That's not an

24:10

acceptable thing to consider. And

24:13

also, there are other risks

24:18

involved. I mean, I've now run into

24:18

several different groups, including some

24:20

at surprisingly large companies, that

24:22

are talking about, let's have the LLM

24:25

have complete control over my email. It

24:27

can read all my emails, and it can reply

24:30

to most of the emails. And I'm going,

24:33

NO,

24:34

>> WHAT?

24:35

>> NO.

24:36

>> I mean, the security risk of that is,

24:39

you know, mind-boggling. But, you know,

24:42

I am very much

24:45

concerned we're going to have some

24:46

really bad security incidents over

24:49

this year, because people are just not

24:51

paying attention. And those are the

24:54

kinds of things that are out there as

24:56

well. So there's a kind of blind rush to

25:01

say, let's grab these

25:03

nice-looking things, at the same time

25:05

as some real concerns that are coming

25:07

across as well.

25:09

So a big trend I see is the

25:13

re-soloing of programming,

25:17

where

25:18

a big part of extreme programming is

25:20

creating a safe social environment for

25:24

basically antisocial people.

25:28

Not just asocial, antisocial.

25:33

And when I think about the degree of

25:36

interaction on an XP team, people are

25:40

talking to each other hours a day and

25:43

happy to be doing so because it's set up

25:45

for that to be a positive experience.

25:48

What I see now is, well, I'm a

25:51

programmer and I've got six agents, so

25:53

really I'm managing a team. No, you're

25:56

not. You're using six tools at once,

25:59

which is fine, but that's very

26:03

different than having a conversation

26:05

with somebody who believes things that

26:07

are a little different than what you

26:08

believe or somebody who's got a

26:11

different energy level today than you

26:13

have. But I see the trend

26:18

is: oh, good. We used to have

26:21

programmers, you remember individual

26:24

offices? We'd have offices and doors

26:27

and you shut the door and you slide the

26:28

pizza under.

26:31

>> It was a thing

26:31

>> and and that was that was easy to manage

26:34

and easy to control. And then along came

26:37

this messy, social, complicated, chaotic

26:40

process that just happened to produce

26:43

really good results.

26:45

And oh, that was uncomfortable. Oh,

26:47

good.

26:49

I can, you know, instead of having 50

26:51

people on my team, I have five people on

26:53

my team. They don't have to talk to each

26:55

other and they can each have 10 agents

26:57

and that's the same. It's not the same.

27:02

>> Yeah, that's, I mean, part of

27:05

the question that, again, I

27:07

think was discussed last

27:08

week. You know, are we seeing that

27:10

two-pizza teams are going to become

27:12

one-pizza teams, because agents don't eat

27:14

pizza? Or do we see two-pizza teams

27:19

staying, but just being able to be

27:22

much more effective and capable? And my

27:26

>> Or do you create a genie that can't

27:29

eat pizza? That's the one.

27:33

>> My bet is on the more effective two-

27:35

pizza teams. And there's also some

27:38

interesting, you know, feedback we're

27:39

beginning to get in terms of pair

27:41

programming. I mean, with pair

27:42

programming, do you say pair programming

27:43

is the human and the genie? Or is it two

27:47

humans and n genies? Because if it's two

27:51

of us, we can control the genies perhaps

27:53

a little bit better. And we also have

27:55

that same interaction. And I'm I'm going

27:59

to find it very interesting to hear

28:01

reports of people trying that kind of

28:03

route where they're saying yes, we have

28:05

those pairs controlling genies, and

28:08

possibly beyond pairs. I mean,

28:10

there's also the whole mob programming

28:12

thing, and whether that will go a route

28:14

combined with the genies,

28:17

that combination. But I don't

28:19

necessarily think one person, many genies

28:21

is necessarily the right answer.

28:24

>> It's the simplest thing to

28:26

understand. It's the simplest framing,

28:28

>> But my experience pairing with two

28:32

humans and a genie, or multiple genies,

28:35

has been very positive. And the fact

28:37

that they're kind of slow is really

28:39

nice. So, every time the models come out

28:41

and they're faster, I'm like, "Oh,

28:43

there's less time to talk."

28:45

You give a prompt and it's like, "Oh,

28:47

well, blah blah blah." And then it's

28:49

gone for 3 minutes and we can talk about

28:51

our philosophy of naming, or, you know, how

28:54

do we express conditionals, or

28:57

what should we be doing next.

28:59

and if it pops back in 15 seconds you

29:01

don't have time to have that

29:02

conversation.

29:05

>> So, a lot of things are changing. As

29:08

a closing question, before we head

29:10

over to Q&A: for software engineers who

29:13

really care about the craft, and engineering

29:15

leaders who care about the craft, and, you

29:16

know, who've learned to love this

29:18

industry. They're seeing a lot of

29:19

things shifting. For example,

29:20

you're not writing the code; you're

29:21

losing a lot of control. What advice

29:23

would you give to them to, you know,

29:25

stay afloat and hopefully come

29:27

out thriving from this change in

29:29

abstraction, basically?

29:31

>> I like to think of a comment, again,

29:33

that came up last week, and I can't

29:35

remember who I should

29:36

attribute it to, which is that the

29:40

Venn diagram of developer

29:42

experience and agent experience is a

29:44

circle.

29:47

And the point here is that what

29:49

we do that's good for the agents is

29:52

good for the humans, and vice versa. I'm

29:54

hearing a lot of feedback saying, "Yeah,

29:55

actually, if you have well-modularized

29:57

code, that actually makes it easy for the

29:59

agents to work with." And we're

30:01

already getting, you know, lots of

30:03

reinforcement saying that actually focusing

30:04

on tests, good tests, helps the agents as

30:08

well as helps us. So I think there's

30:10

a good bit of potential overlap

30:13

here. And again, this could be me

30:14

just wishful thinking, because I want it

30:16

to be true. But I'm going to run

30:19

with it for a bit at least. So I think:

30:21

focus on those craft things, and focus

30:24

on using that and teaching the agent,

30:28

as it were, and working with the agent to

30:30

find out how best to express that.

30:34

One of the things that I found really

30:36

fascinating was talking with another

30:38

colleague of mine, Unmesh, about how he

30:40

was working in domains and he says the

30:43

way he finds working often with an agent

30:45

is to try and develop a language, a

30:47

precise language to communicate about

30:49

the domain with the agent, which is

30:51

basically the kind of model building,

30:53

language building, domain-driven design

30:55

stuff that we're used to doing, but it

30:57

makes him more efficient to talk to the

30:59

agent. So those kinds of things give me

31:01

a sense that there's definitely a

31:03

huge overlap between what is good about

31:05

our practices and what will be good

31:08

continuing to drive with AI.
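[Editor's note: one way to picture the "precise language" idea above is the familiar domain-driven design move of encoding the vocabulary as explicit types, so the same exact terms appear in the code, the tests, and the conversation with the agent. A minimal sketch in Python; the `Money`/`allocate` vocabulary is an illustrative assumption, not the actual domain discussed.]

```python
# Minimal DDD-flavored sketch: encode domain vocabulary ("Money",
# "allocate") as explicit types and operations, giving humans and
# agents the same precise terms. The Money example is illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    cents: int          # store integer cents to avoid float rounding
    currency: str

    def allocate(self, shares: list[int]) -> list["Money"]:
        """Split this amount by integer shares, spreading remainder cents."""
        total = sum(shares)
        amounts = [self.cents * s // total for s in shares]
        remainder = self.cents - sum(amounts)
        # hand leftover cents to the first few shares, one cent each
        for i in range(remainder):
            amounts[i] += 1
        return [Money(a, self.currency) for a in amounts]

bill = Money(1000, "USD")
print([m.cents for m in bill.allocate([1, 1, 1])])  # → [334, 333, 333]
```

With a named operation like `allocate` in place, a prompt can say "allocate the invoice across three shares" and mean exactly one thing, which is the efficiency gain described above.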

31:11

>> So I think, for me, I take a kind of

31:15

OCD enjoyment in the craft, and I

31:21

need to let go of that, because that

31:24

satisfaction of getting this one

31:27

function just right

31:30

just doesn't make a difference anymore.

31:35

Getting an overall understanding of

31:38

what's going on. And I say this with

31:40

sadness, because I really enjoyed getting

31:43

in the zone: you've got some file and

31:46

it's a big mess, and you make tiny little

31:49

safe steps, and you don't know quite

31:51

where it's going, and then you start to

31:53

get a glimmering, and then it's there, and

31:55

then, pop, it just pops into focus, and

31:58

oh, that feels so good. And I can't do

32:01

that anymore.

32:03

But I can still develop an overall

32:06

understanding of what I'm doing. And I

32:09

need to shift my focus to enjoying

32:13

understanding the domain and its

32:16

connection to my program in a way that I

32:19

used to be focused on the program as the

32:22

domain, where I could make that better and

32:25

better. It just doesn't have leverage

32:27

anymore.

Interactive Summary

Software engineering pioneers Martin Fowler and Kent Beck reflect on the legacy of the Agile Manifesto and the transformative impact of AI on the industry. They compare the current AI shift to major historical milestones like the microprocessor and the internet, highlighting AI's role as an amplifier for developer productivity while cautioning against the security risks of autonomous agents and the potential loss of social collaboration in programming. The discussion emphasizes that while the manual craft of writing code is evolving, core principles such as testing, modularity, and domain understanding remain vital for effectively working with AI.
