Nicole Forsgren: Leading high-performing engineering teams in the age of AI - The Pragmatic Summit

Transcript

0:09

Nicole, it is so nice to have you here.

0:12

Last time that you and I talked in

0:14

more long form, in a way that a lot of

0:16

you could enjoy, was on the podcast.

0:18

And back then, Frictionless was not yet

0:20

out. You were working somewhere else.

0:22

Now, Frictionless is out.

0:24

Congratulations. You're now doing a fun

0:27

and exciting job at Google. Can you tell

0:29

us a little bit about what

0:30

you're up to these days and what keeps

0:32

you up at night?

0:33

>> Oh my gosh, how much time do we have?

0:37

Um, very similar work, right? Like how

0:40

can we think about improving the way

0:43

people build software? How can we think

0:44

about I love that, you know, Laura

0:45

mentioned this morning. If we

0:47

can't call it developer experience, just

0:49

call it agent experience and then it's

0:51

all going to work. um thinking of ways

0:54

that we can make agents smarter and

0:56

better so that we can work with them

0:58

better. Um how do we measure really hard

1:01

things like productivity because you

1:03

know Martin mentioned in the last

1:04

session um the measurements were already

1:07

like kind of bad and now they're like

1:09

extra bad. So how can we find ways,

1:12

because like on the one hand we don't

1:14

love a productivity metric because it

1:16

can feel like an attack but if we have

1:17

nothing, right? If this is just like vibes

1:22

I'm sure we've all been in a meeting

1:23

with a director or a VP or something

1:26

where they like just have a gut feel

1:28

this is just how we should go you know

1:31

they don't seem to be super open to it

1:33

if agents just go on gut feel. So

1:36

having some kind of signal is helpful

1:39

>> One thing that struck me about your

1:40

book, either the very

1:44

beginning or the back cover, it

1:46

says that AI is helping us create

1:48

software faster than ever and yet

1:51

delivery or like shipping is just still

1:54

so slow. How do these two

2:00

things go together? What is

2:00

happening in between? So I think there's

2:02

a few things, right? One is that we all

2:03

started focusing on Gen AI and the

2:05

coding, you know, that inner loop because

2:07

we can see it and that's where like all

2:09

of the dopamine hit comes. That's where

2:11

it's all very exciting. And then as we

2:14

go to ship, we've had systems that like

2:15

we already knew they could probably be

2:17

improved, but like it was fine, right?

2:20

There were a handful of people probably

2:22

managing a security review process or a

2:24

launch process or a deployment process

2:26

or, you know, sometimes reviews were a

2:27

little slow and they got backed up.

2:29

Well, now we just threw gas on the fire

2:31

and so all of that is a problem. And so

2:34

what we're doing. I liked how, you know,

2:36

Tibo mentioned it this morning. Now

2:38

we're kind of chasing those constraints.

2:40

We're chasing the bottlenecks in a way

2:41

that it's much more obvious than it was

2:43

in the past. And so like in the

2:45

immediate term, yeah, we were getting

2:47

more out, but now our systems, whether

2:49

technology systems or human systems or

2:51

processes are really kind of getting

2:53

overwhelmed.

2:55

>> Do you have some specific examples?

2:58

We don't need to name specific companies

2:59

but like a thing where like oh you know

3:01

now they're using all these AI tools but

3:03

these are things that are slowing them

3:04

down.

3:05

>> There are a handful of things. So, uh,

3:08

it, you know, kind of changes depending

3:10

on where they are in the process. Review

3:12

ends up surfacing quite a bit right

3:14

because we're just we're putting so much

3:17

work on it. And not only that but like

3:19

humans were already a bit of a

3:21

bottleneck in the review process. Now it

3:23

can be worse, because for fairly

3:25

straightforward changes that some

3:27

companies had automation around

3:28

reviewing, they've removed that reviewing

3:30

when AI is involved, because they're

3:32

worried about the verifiability or the

3:34

reliability of the code, and so now that

3:36

review burden has shifted. I'm also

3:38

seeing quite a bit uh I still talk to,

3:40

you know, a handful of companies. We're

3:42

seeing quite a bit in that

3:44

deployment and release process, right?

3:46

That's

3:48

kind of a black box for a lot of

3:50

folks who like don't know how the

3:51

sausage is made. But so many times that

3:54

process has been managed by humans

3:56

because you're selecting the right

3:58

candidate build and you're verifying it

3:59

and you're thinking, you know, you're

4:01

figuring out cherry picks and then you

4:02

like rebundle and then you send it out

4:05

and that doesn't scale. If you have one

4:08

or two or a handful of people trying to

4:10

make group decisions and do group

4:11

sensemaking

4:14

>> and in in in the book I I realize the

4:16

book came out before Opus 4.5 but it

4:19

described this scenario which seems like

4:21

really alien but obviously it's based on

4:23

a true story that there's a new hire

4:24

joining a company uh using AI tools this

4:28

person turns in her first contribution

4:31

and then for, I think, two or three weeks it

4:33

sits there; the code review

4:36

didn't flag it, she didn't have access to

4:37

the database. Can you tell me a little

4:39

bit about like some of these things?

4:42

Like a lot of us, some people sitting

4:44

here are actually working at startups

4:45

where like it's just a common thing to

4:46

like go from like deployment to shipping

4:48

pretty quickly. How

4:51

common are these things which are

4:53

like just surprising people, like, oh

4:55

these things are being stuck. People are

4:56

just twiddling their thumbs for a

4:58

while and do you see this like being the

5:01

same? Do you see because of AI pressure

5:03

being on on removing these things and

5:05

recognizing them? What are the trends

5:07

that you're observing?

5:08

>> I think one thing I'm seeing that

5:09

probably won't surprise a bunch of folks

5:11

here is organizations are still going to

5:14

organize, right? So like when you've got

5:16

like some review process and we wait two

5:18

weeks, or like the one person has

5:21

to sign off on it but that person's like

5:22

oof, or like all of the things that

5:26

we have structured process around to try

5:28

to make things more uniform are often

5:30

the things that slow us down. So again,

5:33

while we're kind of speeding up that

5:34

inner loop, and now we're starting to see

5:35

agents do more around uh you know,

5:38

reviewing and and a handful of other

5:40

tasks,

5:42

a lot of companies haven't started until

5:44

now thinking about how we could apply AI

5:46

to the very human very business process

5:50

part of it. And so that will keep

5:53

slowing us down until we find a way to

5:54

to address it, right? And a lot of that

5:56

comes with when I first start, you know,

5:58

I get database access and not having

6:00

database access for 2 weeks has like

6:02

historically usually been fine. Maybe

6:05

not great, but, like, 90% of

6:07

the time it was fine. Well, now when you

6:09

can be committing code on your first day

6:11

in ways that the company wasn't

6:13

necessarily structured for, right? Like

6:15

Etsy famously, you know, you you would

6:17

commit code on your first day, but they

6:19

knew that was coming. All of these other

6:20

companies don't know that's coming. I

6:23

knew of one or two cases with an intern

6:25

where because of policies and a couple

6:28

like uh supply chain snafus, they

6:30

didn't get their laptop for like two

6:32

weeks. So, they were on a loaner. They

6:35

had committed a lot of code before their

6:36

laptop showed up and, like, no one in the

6:39

system noticed. There was one particularly

6:40

secure thing they were working on. They

6:42

couldn't figure out how to make that

6:44

work because the source

6:47

didn't match where they thought it

6:48

would. And so I think we're really

6:50

seeing kind of an emphasis and a

6:52

spotlight on the things that were kind

6:54

of fine before and it's that friction

6:56

that really slows us down now.

7:00

>> One thing you've been like so good

7:02

at and I paid so much attention to your

7:04

work and I think it's influenced myself

7:06

and a lot of other people in the

7:07

industry, is how we can

7:09

measure these very hard to measure

7:10

things. And one of the latest, you

7:14

went through a lot of iterations. We

7:15

have DORA, we have SPACE, we have the

7:17

DevX framework and and more. And in the

7:19

DevX framework, this was pre-AI. Can we

7:22

talk a little bit about what the DevX

7:23

framework is? And then like one part of

7:25

it is cognitive load, how that feeds

7:27

into AI.

7:28

>> Yes. So there are many ways to think

7:30

about uh developer experience, but one

7:32

that I find kind of useful is that

7:34

there are three pieces that kind of fit

7:35

together. So there's flow state, there's

7:38

cognitive load, and there's why am I

7:39

forgetting the last one? Laura,

7:41

>> feedback loop.

7:41

>> Feedback loops.

7:43

>> I'll look at Laura. Don't worry about

7:44

Thanks, Laura. Um, and they all kind of

7:48

support each other, right? Because when

7:50

I'm in the flow, the feedback loops are

7:52

really important. If I have to wait 20

7:54

minutes, if I have to wait a week to get

7:56

something, an answer, uh, a question

7:59

answered or a review back, then I break

8:01

my flow. Um, that makes it harder for

8:03

cognitive load as well. So, cognitive

8:04

load is basically like the work that our

8:06

brain needs to do. And there is some

8:08

inherent level of cognitive load in

8:10

something we do, right? So something

8:11

that's difficult is going to take more

8:12

brain power, but things that are easy

8:15

should not take brain power. But

8:17

like sometimes re-ramping into a

8:19

codebase when we haven't been into it

8:21

for a while, that's higher cognitive

8:22

load. And so if I'm already there, I can

8:24

get a bunch of that work for free. Or

8:26

anytime I have to deal with like a

8:28

really arcane process and go through a

8:30

hundred steps, it's easy because it's

8:32

straightforward, but it takes a lot of

8:33

work, right? And that's where, you know,

8:37

thinking about the human can be really

8:39

helpful because you know it was called

8:40

out in a couple talks earlier what's

8:42

good for humans is good for systems. If

8:44

I have well structured code if I have

8:45

well structured documentation if I have

8:48

uh APIs that are like cleanly defined

8:51

and I know what those interfaces look

8:53

like it can be really helpful. And then

8:56

I will say it's kind of revisiting

8:58

this question now of how we want

9:00

to think about cognitive load because I

9:03

want to say Gloria Mark has done some

9:05

really incredible work on focus and

9:07

humans max out at about three to four

9:09

hours a day like really really hard deep

9:12

work right um which always makes me

9:15

laugh when execs are like, we need eight

9:17

hours of intense work and I'm like not

9:18

with humans, our brains don't do

9:21

that and so now when we have these three

9:23

or four hours how can we use them best.

9:25

And what does it mean when we're working

9:28

with AI and with agents? Because for

9:31

some of us, um, good deep work means I

9:34

block my calendar and I can get really

9:35

embedded and I can do one thing and I

9:37

can think really hard. And now a lot of

9:39

the models are very interruptive, right?

9:42

I'm getting pinged all the time. And so,

9:44

how can I change the way I work or how

9:47

can I think about managing my own

9:49

cognitive load? How can we think about

9:50

it more broadly in organizations knowing

9:53

that the nature of the work that we're

9:55

doing has in many cases really changed

9:58

>> and one interesting thing I see with

10:01

agents, and all of you will be

10:02

seeing it is the feedback loop is faster

10:05

right you tell it do this and it comes

10:06

back especially some tools like Claude

10:08

Code are very good at doing that and

10:10

then you start to get really tired

10:12

because in using your terminology like

10:14

the cognitive load increases by faster

10:17

feedback loops and it seems so

10:19

counterintuitive, because before

10:20

AI, we were all about

10:22

iteration, fast feedback loops. We

10:24

were never ever close to having these

10:26

fast feedback loops. So what

10:28

is happening? Is this a net good? Is

10:30

this a net bad? Is it good

10:32

that we're having faster feedback loops,

10:33

but is it bad that we're having more

10:35

cognitive load? Or, like, it

10:37

feels like such a contradiction.

10:39

>> I think it's just different, right? So

10:41

fast feedback loops were good before

10:43

because if I for example, if I had a

10:44

question about a library and someone

10:46

could get back to me, then then I could

10:47

continue on, right? like there was a bit

10:49

of a pause, but I kept going. Well,

10:51

now I'm getting feedback so quickly that

10:55

I'm having to sometimes rebuild my

10:57

mental model dozens of times in like a

11:00

30-minute period. And so it's not just

11:02

that getting fast feedback is good, but if

11:04

it's faster than I know how to keep up

11:06

with or if it's interrupting because,

11:08

you know, sometimes they just want to

11:10

inject text when I am not ready for them

11:13

to do that completion. And so, you know,

11:16

I think some of that is, you know,

11:18

sometimes I'll just turn it off because

11:20

I just like need to write for a second

11:21

and then I'll let it review. So, I

11:23

will say right now like a lot of this is

11:25

kind of an open question. People are

11:26

starting to look at it, but also what

11:28

the environments were like 6 months ago

11:30

is very different than what they were

11:32

like now. So, this is kind of evolving.

11:34

It's interesting when you said like you

11:36

turn it off because I was talking with

11:37

Mitchell Hashimoto, uh, founder of

11:40

HashiCorp, about a week ago and he was telling

11:41

me that his workflow is he has an agent

11:43

always on the side that usually

11:46

runs but he turned off all notifications

11:48

because he only wants to go when he is

11:50

ready and he kicks it off with something

11:51

and he doesn't care when it finishes and

11:53

I feel we might be like people might be

11:55

starting to discover their working style

11:58

on what works and what doesn't. But

12:00

speaking of flow state, you know,

12:03

like being in the flow, it used to be

12:05

amazing. As an engineer, I used to be

12:07

really efficient in the flow and now

12:08

with with AI you can also kind of get

12:10

into flow maybe a little bit easier. In

12:12

your book you did mention something

12:14

interesting and this is about the

12:15

tooling but you said flow state does not

12:17

only depend on tooling. You specifically

12:18

said how things like psychological

12:21

safety, project ownership, uh, how

12:24

technical decisions are made, how much

12:26

autonomy you have, all those factor in. How

12:29

do you see this changing with AI where

12:31

like the tooling seems to be really good

12:32

at getting in the flow? But could could

12:34

we see that people are actually still

12:35

struggling to get into the flow because

12:37

they're lacking a lot of those things?

12:39

>> Uh well, tech is easy, people are hard.

12:43

And so, you know, sometimes getting in the

12:45

flow really is about understanding what

12:47

I'm doing, having very clear direction

12:49

and goals, and knowing what

12:51

my work is doing so it's well scoped,

12:53

but also not just having a well-scoped

12:56

feature or something, but knowing what

12:58

the purpose of it is so that I can make

12:59

informed decisions. Some of it is having

13:02

that psychological safety so that I know

13:04

that I can take a risk on something or I

13:06

can ask someone on my team. And you

13:09

know, Kent mentioned, when he was

13:11

calling AI the genies. When we're

13:12

working with the genie, that's not the

13:14

same thing. We might have a handful of

13:15

genies, but that's different from having

13:16

a handful of friends, right? In part

13:18

because the energy is different. The

13:20

conversations are different. Also, they

13:22

just agree with us constantly.

13:25

I'm always so smart and I'm like, I know

13:27

that was dumb. I know that was very

13:30

dumb. And, I will say, I do my

13:32

best work. Like, I can't think of any

13:36

paper, as one example, or book I've

13:38

written on my own because many times I

13:40

get them started and I'll write most of

13:41

it and then I'm like I really need

13:43

someone to tell me like where a hole is,

13:45

where am I dumb, what am I missing, what

13:48

makes perfect sense in my head, but does

13:50

that make sense to them when I say it?

13:52

And like our our AI tools and agents

13:54

just aren't there yet. Sometimes they

13:56

guess really well and sometimes they

13:57

guess in a completely orthogonal

13:59

direction. But on that one, do you

14:01

want to tell a story about the

14:02

Frictionless, the book, when you started

14:04

writing it and then you you went back

14:06

and I think you deleted a good part of

14:08

it, right?

14:10

>> Listen, I was a software engineer for

14:12

years and then I was a researcher and I

14:15

was writing a bunch. Researchers write a

14:17

very particular way. There's a lot of

14:19

detail. There's a lot of background and

14:21

on page like 105 you get to the point.

14:26

And so I had started it. I was maybe

14:28

working with someone like we kind of

14:30

chatted about it but I get through this

14:32

whole section of the book and I realize

14:33

I've created several chapters of

14:37

basically like how to do research when

14:39

you're not a researcher like how to

14:41

write good survey questions how to talk

14:42

to people so you understand. It was

14:44

incredibly like detailed and easy to

14:46

understand and 100 pages that no one

14:49

needs to read, ever. No one is going to

14:51

read this. And so I just, like, tossed it

14:53

and reached out to Abby and I was like

14:55

do you want to write this book I think I

14:57

have an idea of the direction I'm going.

14:58

Also, tell me if I get in a rabbit hole

15:00

cuz it made sense and it was right.

15:02

Also, no one wants to

15:04

read that. And we ended up turning it

15:06

into workbooks in the back, which were

15:07

great because then it's like fill in the

15:09

table, check the thing, make it really

15:11

useful and actionable.

15:13

So, that I'm not great at. That's

15:16

my problem with flow state is I'll get

15:17

in a flow and I'll write for the wrong

15:19

audience or I'll write at the wrong

15:22

altitude or I'll code at the wrong like

15:26

level of specificity especially with

15:27

agents right sometimes I'll just give it

15:28

something broad I'm like that really

15:29

should have been more detailed but by

15:31

the time I find that out we're an hour

15:32

in and

15:34

but I I I wonder if if one takeaway

15:36

might be that effort is not wasted,

15:39

right you spent a lot of time in flow

15:41

state writing, let's say, the wrong thing,

15:43

and learning and then the end result was

15:45

something special something that would

15:46

have not happened if you would have

15:47

so-called one-shotted it. We talk about so

15:49

much about one-shotting things. Do you think

15:51

like we might have to really relearn

15:53

that effort and wasted energy even

15:55

as a human when agents can do infinite

15:57

of these things it might be helpful for

15:58

us

15:59

>> oh I agree you know one is being able to

16:02

clearly articulate the problem or the

16:04

thesis or the idea. Without trying, I

16:07

don't get there, right? Um, and that's true

16:11

of so many things I think It's both in

16:13

terms of like learning things and kind

16:15

of getting your hands around them. But

16:16

one open question that I think is really

16:18

interesting to me is back when I was

16:19

doing more coding. I kind of had a feel

16:22

for the system because I was coding it

16:24

all the time. Did I know the system? No,

16:25

it was huge, right? But I knew my part

16:28

of it; I could whiteboard it reasonably well

16:31

and now we're coding and things are

16:33

changing so rapidly. I think there's a

16:35

really interesting open question around

16:37

how can we help support and build these

16:39

mental models not just in a way that

16:40

reduces cognitive load or improves flow

16:41

but helps us understand our systems

16:44

whether it's like for me I'm a visual

16:46

person so I years ago when it was first

16:48

coming out I was asking it for Mermaid

16:49

diagrams all the time, right? Like I

16:52

needed to see what it was doing; I needed

16:53

it to whiteboard with me and

16:56

I think that'll be different for

16:58

everyone else but without taking that

16:59

time, like, our brains just work that

17:03

way, right? Our brains just work that

17:04

way better.
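
The whiteboarding idea above can be automated in a small way: generate Mermaid text from whatever dependency data you already have and paste it into any Mermaid-aware renderer. A minimal sketch in Python; the service names and the shape of the dependency map are invented for illustration:

```python
# Sketch: render a {service: [dependencies]} map as a Mermaid
# flowchart, the kind of quick "whiteboard" view described above.
# The example services are hypothetical.

def to_mermaid(deps: dict[str, list[str]]) -> str:
    """Emit Mermaid flowchart text, one edge per dependency."""
    lines = ["flowchart TD"]
    for service, targets in deps.items():
        for target in targets:
            lines.append(f"    {service} --> {target}")
    return "\n".join(lines)

deps = {
    "web": ["api"],
    "api": ["auth", "db"],
    "worker": ["db", "queue"],
}

print(to_mermaid(deps))
```

The output pastes directly into most markdown renderers that support Mermaid, which is enough to start rebuilding a mental model of a system that agents have been changing quickly.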

17:05

>> So, we talk about taking the time and we

17:07

want to understand it's a large

17:09

change, but this one's a question for

17:11

everyone in engineering leadership

17:12

position who is being hammered by their

17:14

CEO and board and all of these things

17:16

saying all right we're paying a bunch of

17:18

money for this stuff and I already see

17:19

that. Now, how do we measure it? And

17:22

they're going to ask you, like, you've all

17:24

been in this excellent conversation

17:25

with Nicole and what did Nicole say what

17:27

metrics should we measure and like can

17:31

we actually talk honestly about

17:32

where we are, what can work, and

17:35

how you think about this and you must

17:37

get this so much.

17:38

>> That is my job. Um, it depends.

17:44

It is always it depends. Uh, we can all

17:46

go into consulting now and just make a

17:48

ton of money and cash the check. Um, it

17:52

really does though. It depends on what

17:54

question it is you're trying to ask,

17:56

right? So, when someone says, "Am I

17:57

being more productive?" Then I will say,

17:58

"What do you mean by productive?" I know

18:00

it when I see it. What what shape does

18:02

it take? Right? like code smells have a

18:04

thing, productivity smells have a thing,

18:06

right? And sometimes it's lines of code

18:09

or like PRs or something. And I'm like,

18:13

okay, so what does that what do you

18:15

learn from that,

18:17

right? Does that help you

18:20

get a feature out to a customer faster?

18:22

And sometimes they're like, well, yeah.

18:23

And I'm like, okay, is it the right

18:24

feature? Do we know it's the right

18:26

feature? And what part of

18:29

that end-to-end process are we

18:32

amplifying? do you also want more ideas

18:34

and more lines of code and more reviews

18:37

and more of everything, and they're

18:40

like well no I'm like I I'm just asking

18:42

questions right and so I think that that

18:46

can help now what are we using to

18:48

measure productivity now I will say it's

18:50

evolving, right? So the SPACE framework

18:53

ends up being really helpful. So SPACE is

18:56

satisfaction

18:57

uh how satisfied you are with the thing

19:00

uh performance right what's an outcome

19:02

whether it's quality or something. Uh

19:04

activity is a count, that's anything

19:07

you can count. Uh C is collaboration and

19:10

communication which can be between

19:12

people which we're seeing evolving. Um

19:15

or between systems. Uh and then E is

19:17

efficiency and flow. So that can be like

19:19

if we're in a flow or it can be the time

19:21

just the time it takes to get through

19:22

the system, right? And I know I've heard

19:24

a couple of people say here that, you

19:25

know, everyone's talking about velocity

19:27

and they want things to be faster and

19:28

they care about velocity. And I'm like,

19:32

I hear you. Yes, that can be good. What

19:35

are the guardrails that you want to put

19:36

in place? Right? How do we want to think

19:38

about quality? How do we want to think

19:39

about satisfaction? How do we want to

19:42

think about whatever? Because

19:45

if we just brute force it, something's

19:48

going to break, right? and and there are

19:50

ways where we can make informed

19:52

decisions. So I've worked with teams who

19:55

I will say, there was a question

19:56

in one of the other sessions about

19:58

sacrificing quality for speed.

20:01

Some teams can, but when they do it

20:03

they're doing it very very

20:04

intentionally. They don't say they're

20:05

sacrificing quality. They say they're

20:07

making a risk-based decision. Which

20:10

is fair, though: they're running a rapid

20:11

experiment. They want signal really

20:13

really quickly and if they can get an

20:14

experiment out in an hour and they can

20:17

run it against some very small

20:19

percentage then they're willing to take

20:21

the risk of lower latency or a crash or

20:25

something for that very small percentage

20:26

and then they back it out and then they

20:28

get an answer. So like I think with the

20:30

right metrics in place it really helps

20:32

us make those risk-based decisions

20:34

versus all fast or all slow. And I do

20:38

still see some teams that are like all

20:39

slow because they just want to pump

20:41

the brakes. You know, some folks

20:43

in like security

20:47

which is understandable but it's like

20:49

now they're kind of on fire right their

20:52

feet are on fire because there's just so

20:53

much to do. Oh yeah, especially when the

20:56

rest of the business is out there. We talked

20:58

about yesterday we had a small event and

21:00

we talked about how David Cramer from

21:03

Sentry was talking about how a lot of

21:05

non-developers are getting access to

21:06

Claude Code and they're loving it and

21:07

they're so productive and oh this might

21:10

have been actually someone from a larger

21:12

publicly traded company, which I

21:13

won't name, but one of the business

21:15

developers like created this like

21:17

awesome tool to I think look at sales

21:19

proxies and all that and then

21:20

accidentally made it available to the

21:22

whole world and they caught it in time

21:24

but now there's a lot of those folks

21:26

so, and I think David Cramer from

21:27

Sentry was saying like, "Yeah, like we

21:29

we have this like annual training where

21:30

developers go and they go kind of a

21:32

yawn, but we will need to make this a

21:34

lot more interactive and engaging and

21:36

everyone in the business will have to

21:37

go." So like there's going to be this

21:38

fun challenge. So now sounds like it's a

21:40

good time to be in security.

21:42

>> It really is. Well, and because now

21:44

there's also kind of, I don't

21:46

know, say, evolving definitions of security,

21:47

right? Like something's kind of secure

21:49

kind of not, but what are the signals

21:50

that we're looking for? What are the

21:52

levels of security that are important?

21:53

There are even some good questions

21:55

around, you know, with some of the

21:57

regulations in certain countries, you

21:58

had to have at least two people review

22:00

the code before it can deploy. What does

22:03

that mean, right? Are there ways that we

22:05

can revisit some of that now? And there

22:07

were some improvements made, um, over

22:09

the

22:11

last decade or two so that if you passed

22:14

a set of automated checks and tests,

22:16

then that would count as one person,

22:17

right? Well, now it's two. Well, what

22:18

happens when we have agents now,

22:20

right? And so I think some of this will

22:22

be important for us to kind of discover

22:24

and think about really creative ways to

22:26

solve the problem in ways that are

22:27

meaningfully

22:29

consistent and also educate the rest of

22:32

not just the industry but regulatory

22:35

fields, right?
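
The reviewer-counting rule discussed above can be written down as a tiny policy check, which makes the open question concrete: where do agent reviews fit? This is a sketch of the idea only, not any real regulation's wording; the parameter names and the choice not to count agents by default are illustrative.

```python
# Sketch of a "two reviews before deploy" policy where passing
# automated checks counts as one reviewer. Whether an agent review
# should count is the open question; count_agents=False is just one
# possible stance, not a recommendation.

def may_deploy(human_reviews: int,
               checks_passed: bool,
               agent_reviews: int = 0,
               count_agents: bool = False) -> bool:
    """True when the change has at least two review credits."""
    credits = human_reviews
    if checks_passed:
        credits += 1  # automated checks count as one reviewer
    if count_agents:
        credits += agent_reviews  # the policy choice under debate
    return credits >= 2

# One human plus green checks clears the bar; an agent review
# alone does not under this default policy.
print(may_deploy(human_reviews=1, checks_passed=True))
print(may_deploy(human_reviews=0, checks_passed=True, agent_reviews=1))
```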

22:38

Today, if you're a VP of engineering sitting

22:40

in this this group and you are in the

22:42

process of rolling out all these AI

22:44

agents, from Claude Code, it might be Codex,

22:46

it might be other vendors. We

22:49

mentioned the importance of measuring

22:50

things. What would your

22:51

suggestion be? Specific things that you

22:53

can measure, and probably it's not harmful, it's

22:55

probably helpful to measure already at a

22:56

tactical level. And how would you come

22:58

at it, making sure

23:02

you have the right data? You're

23:04

thinking about, like, not

23:06

necessarily invading too much of

23:07

developer privacy and not collecting

23:09

junk data.
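
As a reference point for the measurement discussion, the SPACE dimensions described earlier can be pinned down as a simple per-team scorecard. The field names and example values below are illustrative, not an official SPACE schema:

```python
# Sketch: the five SPACE dimensions (satisfaction, performance,
# activity, collaboration/communication, efficiency/flow) as a
# per-team record. Values here are invented for illustration.
from dataclasses import dataclass

@dataclass
class SpaceScorecard:
    satisfaction: float       # e.g. tool-satisfaction survey, 1-5
    performance: str          # an outcome, e.g. a quality signal
    activity: int             # anything you can count (PRs, runs)
    collaboration: str        # between people, or between systems
    efficiency_days: float    # time to get through the system

team = SpaceScorecard(
    satisfaction=4.1,
    performance="change failure rate trending down",
    activity=87,
    collaboration="review turnaround within one day",
    efficiency_days=2.5,
)

print(team.activity, team.efficiency_days)
```

The point of keeping all five together is the one made above: a single dimension in isolation (lines of code, PR count) is easy to game, while reading the dimensions against each other gives the guardrails.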

23:11

>> It depends. Um a lot of it kind of so I

23:15

I will say I tend to start with

23:16

adoption. I am not a fan of an adoption

23:18

metric. I don't like it. But also devs

23:21

are like a gloriously cranky bunch. We

23:24

are not going to use tools that are

23:26

awful unless we absolutely like if it's

23:28

the only option. There's almost no other

23:30

option. I'm going to sound old when I

23:31

say this, but like I one time had a

23:33

company tell me, oh well they have to

23:34

use that CI/CD system. I'm like 20

23:36

bucks says they're just spinning up Jenkins.

23:39

And they were right. And so I think

23:41

adoption can give us some early signals

23:45

in part to satisfaction

23:48

because if a tool is awful then they

23:49

won't use it. And if we aren't engaging

23:51

with a tool, then we can look at

23:52

engagement. If we're not engaging with

23:54

it then we can't understand right like

23:56

we don't know what the capabilities are.

23:57

We don't know, like, how to kick the

24:00

tires. Um, you know, we might love it

24:02

immediately and decide it's amazing and

24:04

then later find out what its

24:06

weaknesses are. We might hate it and

24:07

never go back. But I think that can

24:09

help. Engagement is another one which is

24:11

how much are people using it and for

24:13

what kind of tasks and so there's some

24:15

tooling. Uh, and I know earlier

24:17

studies found that you know for fairly

24:19

straightforward work it gets used quite

24:21

often right and so we can also watch how

24:23

people are kind of using that now I'll

24:25

come back to it depends right what is it

24:26

that you're going for as a hypothetical

24:28

VP, right? Do you want people using it

24:31

do you want them to get faster because

24:34

everyone talks about faster right and

24:35

then what do you mean by faster is it

24:37

the inner-loop coding part? Is it features

24:39

end to end because then you have to take

24:40

a much more holistic look at the whole

24:43

system. Um especially if we're talking

24:46

about some like magical agentic future

24:48

where they're all self-driving. But

24:50

that's another metrics rant.
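To make the adoption-versus-engagement distinction concrete, here is a minimal sketch of the kind of early signals described above, computed from a tool-usage log. The log format, function name, and task categories are hypothetical illustrations, not anything from the talk or from a specific product:

```python
from collections import defaultdict

def adoption_and_engagement(events, team_size):
    """Compute simple adoption/engagement signals from tool-usage events.

    events: list of (developer_id, task_type) tuples, a hypothetical log.
    Returns (adoption_rate, uses_per_active_dev, uses_by_task).
    """
    devs = set()
    uses_by_task = defaultdict(int)
    for dev, task in events:
        devs.add(dev)              # who has touched the tool at all
        uses_by_task[task] += 1    # what kinds of work it gets used for
    adoption_rate = len(devs) / team_size if team_size else 0.0
    uses_per_active = len(events) / len(devs) if devs else 0.0
    return adoption_rate, uses_per_active, dict(uses_by_task)

# Example: 3 of 5 devs on the team used the tool this week.
log = [("a", "boilerplate"), ("a", "tests"), ("b", "boilerplate"),
       ("c", "refactor"), ("a", "boilerplate")]
rate, per_dev, by_task = adoption_and_engagement(log, team_size=5)
print(rate, per_dev, by_task)
```

Adoption (what share of the team touched the tool at all) and engagement (how often, and for which kinds of tasks) answer different questions, which is why the sketch reports them separately.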

24:54

outside of just measuring, one thing

24:56

that I heard is an interesting approach

24:58

is giving explicit permission. Uh

25:02

Rajeev Rajan, Atlassian's CTO, who will be

25:05

our speaker in the next session. At

25:07

Atlassian, he sends a message telling

25:09

everyone: for 10% of your time you have

25:11

my explicit permission to experiment

25:13

with these systems and just see how they

25:15

work. How do you see these kind of

25:19

approaches which feels a little bit top

25:20

down but it also I guess creates a bit

25:22

more safe space. Do you see this being

25:25

useful in general for new technology or

25:27

especially right now?

25:29

>> It's

25:31

I think it's important in general right

25:33

it's basically like comms and change

25:34

management. It's like the really old

25:35

school stuff. I think it's especially

25:37

important now though because there's so

25:39

much fear and risk and unknown around

25:41

using AI tools. Will I be fired for

25:44

using them? What if I make a mistake by

25:46

using them? And so, um, I'm seeing

25:48

across at least a handful of companies

25:49

that explicit exec sponsorship makes a

25:53

huge difference in not just using them,

25:55

but trying new things and feeling safe

25:58

to fail within, you know, kind of guard

26:00

rails. And I know, you know, for years

26:02

there have been places where like if you

26:03

take down prod, you get some kind

26:05

of prize.

26:07

Without taking that to its extreme, that

26:09

can also be helpful because they're

26:11

helping pressure test the systems that

26:12

we work in right now.

26:15

>> In your book, which is about again

26:18

removing friction, having ways to like

26:21

move better, faster, etc. And

26:23

towards the end there's a whole chapter

26:25

on

26:27

um self-support. The chapter is

26:30

called support yourself through

26:32

challenging work. Can you tell us

26:34

about why you wrote a whole section on

26:36

it and just advice on how folks can

26:39

support themselves how you see either

26:42

your supporting yourself or how you see

26:44

peers getting through this like pretty

26:46

intense time.

26:47

>> Yeah. So the context of that last

26:49

section was uh supporting organizations

26:52

through change, supporting your teams

26:53

through change and supporting yourself.

26:55

And it was interesting because when I

26:56

interviewed several handfuls of

27:01

engineering

27:01

leaders as we were talking about some of

27:02

this and more than a few of them said it

27:05

was really important for them to not

27:07

just support their teams you know

27:08

provide executive formal support for

27:10

using new tools and systems but also

27:12

themselves because anytime you're going

27:15

through any kind of change whether

27:17

you're you know kicking off a brand new

27:18

DevX initiative or you're rolling out AI

27:22

especially now, when everything is so new,

27:25

there's a lot that's unknown there.

27:27

These are really hard problems, right?

27:29

And so having a couple folks that you

27:31

can talk to, your own uh I want to say

27:33

Rose Whitley said uh you should have

27:36

your own board of directors because then

27:39

we can bounce ideas past people. We can

27:41

safely say

27:43

what is happening. I have to go I have

27:46

to go to an exec review and I need to

27:48

have an opinion and like I understand

27:50

half of this like can you talk this this

27:53

through with me and I think that also

27:55

helps with burnout right because burnout

27:57

we know you know Christina Maslach has

27:59

done some some really great work where

28:02

um burnout is a combination of things

28:05

it's working too hard right but that

28:07

actually isn't burnout that's just like

28:09

getting tired another piece that's super

28:12

critical to burnout is not having your

28:13

values aligned

28:15

And so sometimes I have found that

28:17

talking things through with people, and others who

28:18

told me the same is kind of

28:20

understanding where your values are, if

28:22

your values align and then if they

28:24

don't. And many times they found that

28:25

they did align and it sort of like

28:27

relieved some of that pressure that they

28:28

were under.

28:31

>> And finally looking ahead um in two to

28:35

three years time how would you envision

28:37

a more or less frictionless organization

28:40

operating, a company that takes

28:42

this really seriously? they are adopting

28:44

AI tools. They're like, "All right,

28:46

let's remove the friction points." How

28:47

would that look? And if you're

28:49

sitting in this room today and you're

28:51

you want to walk away and you want to

28:52

start doing something this week, where

28:54

would you start on top of course getting

28:57

the book and reading it?

28:59

>> Uh workbooks are also free online. You

29:01

can go find those. Um so a couple of

29:04

things for the kind of frictionless

29:07

future. I'm I'm a metrics person, so my

29:09

answer is going to be about data. Um,

29:11

but I think, you know, I've I've been

29:14

having conversations with folks for a

29:15

handful of months now around if we think

29:17

there's this future world where agents

29:19

can self-drive and self-improve and they

29:21

can do all the things and our

29:22

organizations run better. For that to be

29:25

true, right? So, that's going to be

29:26

true. Maybe, maybe not, but it's a

29:28

stretch. For that to be true, agents

29:32

need to be able to see and understand

29:33

the system and agents need to be able to

29:36

improve the things that need fixing. For

29:38

that to be true, humans need to be able

29:40

to see and understand the system and

29:42

then take action to fix it. Uh and for

29:44

that to be true, we got to see the

29:45

system, right? Particularly when when

29:48

we're moving really really quickly.

29:49

Right now, humans are a stopgap, right?

29:52

We'll talk to people, we understand the

29:54

system. I I just know that when there's

29:57

a problem over here, it's like usually

29:59

about the build, right?

30:03

Agents aren't going to be able to do

30:04

that. Or if they are, like we probably

30:05

don't want that. And so how can we think

30:07

about ways to easily and cheaply surface

30:10

some of the signals that can help us

30:12

make decisions? And it doesn't have to

30:13

be super heavyweight. Although like

30:15

agents can also probably help us build

30:17

some instrumentation that's pretty good,

30:19

right? So how can we how can we think

30:21

about ways to first of all identify what

30:23

are the touch points that we care about?

30:24

Where are the signals that we want to

30:26

see? How can we make it cheap and easy

30:27

to get a hold of those? And then how can

30:30

we kind of sense make around them and

30:32

and realize that's going to change,

30:33

right? Like there's several phases in

30:35

like writing software. There's like

30:37

having the idea and coming up with the

30:38

design and then coding it. And then

30:41

right now that whole front end's been

30:42

like kind of smooshed because many times

30:45

we can just like prototype really really

30:47

rapidly and kind of solidify some of

30:50

what we're thinking in terms of like

30:52

ideas and coding. So I fully expect that

30:58

part of the outer loop is just going to

30:59

be collapsed as well, right? Because

31:00

we'll find more efficient ways to do

31:02

that. But it's going to be helpful if in

31:04

the interim we know where some of those

31:05

touch points are, right? Like what are

31:07

we looking for? What are the quality

31:08

gates? What are the signals that show us

31:10

something is working well or not? And

31:12

then if we collapse then where do those

31:14

signals shift to or do they just

31:17

disappear?
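One lightweight way to act on the idea above, identify the touch points, then watch the signals at each one, is a small table of phases, signals, and thresholds that gets checked automatically. Everything here (phase names, signal names, threshold values) is a hypothetical sketch, not a recommendation from the talk:

```python
# Map each touch point in the delivery pipeline to one signal and a
# threshold beyond which that phase looks unhealthy. All illustrative.
PHASES = {
    "code_review": {"signal": "median_review_hours", "threshold": 24},
    "ci_build":    {"signal": "p90_build_minutes",   "threshold": 15},
    "deploy":      {"signal": "change_failure_pct",  "threshold": 15},
}

def unhealthy_phases(measurements):
    """Return the phases whose measured signal exceeds its threshold."""
    flagged = []
    for phase, spec in PHASES.items():
        value = measurements.get(spec["signal"])
        if value is not None and value > spec["threshold"]:
            flagged.append(phase)
    return flagged

# Example week: reviews are slow and deploys are failing; builds are fine.
print(unhealthy_phases({"median_review_hours": 30,
                        "p90_build_minutes": 10,
                        "change_failure_pct": 22}))
```

If a phase collapses (say, review and build merge into one agent-driven step), you edit the table rather than the checking logic, which is roughly the "where do those signals shift to" question in data form.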

31:18

>> Yeah. And it feels to me that with so

31:19

much change coming, one thing that feels

31:21

really important to me that again it was

31:23

in the book and we just talked about is

31:24

having this personal board of directors,

31:26

finding peers who ideally work at

31:27

different companies. You might meet some

31:29

people here. You might already know

31:31

them. Reach out to them. And it sounds

31:32

like it's a time where everyone will be

31:34

happy to get together to have coffee.

31:35

Create a WhatsApp group, or just

31:37

a group chat, with a few of

31:38

you and you talk. I'm doing this. I'm

31:40

seeing this. I'm doing this. I'm seeing

31:41

this because it seems like the only

31:43

certain thing is it will change and it

31:44

will depend what works for you. So if

31:46

you get like-minded people in similar

31:48

industries, the "it depends" will probably

31:50

be more similar for a lot of you, right?

31:52

>> Yeah. Um, and I found that to be some of

31:54

the most helpful and the most beneficial

31:56

for me is, you know, can I bounce an

31:58

idea off someone? Is the way I'm

31:59

explaining it making sense? Uh, are the

32:02

things that I'm seeing similar to the

32:04

things that you're seeing? And if yes,

32:05

what could that mean? And if no, is it

32:08

actually different? Or are we just using

32:09

different words? Right? And so that I

32:11

think especially when we're in a time of

32:14

change at all, but especially this

32:16

rapid, it's super super helpful. And I

32:18

also just keep I've got like the back

32:20

channel, right? So sometimes it's

32:22

talking to someone and sometimes it's

32:23

just like popping a question at a back

32:24

channel with a handful of folks that you

32:26

know and respect and you feel safe with.

Interactive Summary

A discussion with Nicole Forsgren about her book "Frictionless" and the evolving landscape of developer productivity in the age of AI. The conversation explores the paradox of faster coding versus slow shipping, the importance of the DevX framework involving flow, cognitive load, and feedback loops, and the need for organizational transparency and psychological safety. Nicole emphasizes using the SPACE framework for measurement and the value of a "personal board of directors" to navigate rapid technological change.
