
Data vs Hype: How Orgs Actually Win with AI - The Pragmatic Summit

Transcript

0:04

Today I wanted to have a really pragmatic and down-to-earth conversation about AI: what is actually happening in our organizations, what you can expect to happen, and how agents are changing the game. And I thought that in order to have this really pragmatic, down-to-earth conversation, I wanted to take us to space.

0:25

I do see a lot of parallels between the

0:27

age of exploration and the space race

0:29

and the age of AI. So last week I was

0:32

talking with a CTO co-founder of a small

0:34

startup. I was also talking with a

0:36

principal engineering lead at a very big

0:38

bank, highly regulated and we sat for a

0:41

solid 15 minutes talking about all the

0:43

cool stuff that we were building, how

0:45

it's brought back joy, the joy of

0:46

coding, and we just had all of these

0:49

ideas and it seems like we couldn't

0:50

build fast enough. There is so much to

0:54

learn and so much to build. And it

0:56

reminds me of this really lovely quote

0:58

from Carl Sean that I love that

1:00

somewhere something incredible is

1:02

waiting to be known. And I think that

1:04

this quote really captures what a lot of

1:06

us are feeling about AI and about the

1:09

experimentation and just the possibility

1:10

that is out there.

1:14

This is the same feeling we had in the age of space exploration: going to the moon, going to Mars. But it didn't come without skepticism. Why spend all of this money experimenting and going to the moon when we had lots of problems to solve here on Earth? We had a lot of wonder, but we also had a lot of skepticism, because space exploration wasn't just about science. It was also global. It was economic. It was political. Space wasn't a silver bullet that would solve all of humanity's problems. But we also can't deny that when a man landed on the moon, it was a pivotal, defining moment for all of humankind, one that inspired a sense of wonder and held the world in awe. It was about redefining what was possible.

1:59

And similarly, we have a lot of wonder

2:01

and a lot of optimism and a lot of

2:03

promise about AI. We can talk about

2:05

productivity boosts and all of the hype

2:07

around, you know, 100% productivity

2:09

boost and all of our code being written

2:11

by AI. We have the promise of perhaps

2:13

the first singleperson billion-dollar

2:15

startup with a person with an idea and

2:18

an army of agents. There's a lot of

2:20

optimism out there. Similarly, there's a

2:23

lot of skepticism. There's a lot of

2:25

skepticism in the corporate world about

2:27

the real economic impact of AI given how

2:30

expensive it is, given the environmental

2:31

impact. There's also a lot of skepticism

2:34

in a lot of different studies about the

2:35

real productivity impact. In certain

2:38

circumstances, it can be really um it

2:40

can really accelerate and in other

2:42

circumstances it can actually slow us

2:44

down and get in our way. It's hard to

2:45

know what's real.

2:48

But as technology changes, it is good and fine to have that sense of wonder about exploring the universe while also realizing that we have problems here on Earth to solve. We have to learn how to balance that sense of wonder and curiosity with the acknowledgment that we are living in reality and need to keep our feet firmly planted on this earth. We need to understand how these experiments are actually going to apply to everyday companies. How are we actually going to improve the world around us? We need to keep the sense of wonder while balancing it with pragmatism, and beat the hype by looking at data.

3:22

And so that's what I want to do right now. I'm going to share some brand-new AI industry benchmarks with you. This is new data that no one has seen before; I just pulled it down. (I guess the stats team has seen it, because they saw the preview of my slides, but aside from them, no one has.) This is not really surprising, though, because a lot of these numbers have not changed very much since last quarter. What we're looking at here is a sample of 121,000 developers at over 450 companies, pulled from November 1st through February 1st, 2026. We're sitting at around 92.6% of developers using an AI coding assistant at least once a month to get their work done, and about 75% of developers using one at least once a week. When I say "AI coding assistant," most developers define that as Cursor, Codex, Copilot, or Claude, not necessarily ChatGPT, but it is a bit open-ended, so keep that in mind.

4:24

When it comes to time savings: time savings is not the only measure of productivity impact, but it is an important signal, a good leading indicator. We're sitting at around 4.08 self-reported hours saved per developer per week due to AI tool usage. This is not all that different from the number that came in for Q2 of 2025, and the number for Q4 of 2025 was about 3.6 or 3.7, so this has been hovering around the four-hour mark. There have been a few articles, for example from Google in the last year, citing about a 10% productivity increase, and if we look at it in terms of time savings, we're hovering around that 10% mark as well. It hasn't changed dramatically over the last few quarters.

5:11

What is changing, and moving up very quickly, is the amount of code written by AI that is getting merged upstream or into a customer-facing environment without significant human intervention. We call that AI-authored code. In a sample of around 42,600 developers from that same time frame, November 1st to February 1st, 2026, we're at about 26.9% industry-wide. That's how much code hitting production was AI-authored. This is up from 22% last quarter, which is actually a pretty significant change quarter over quarter. And we can see that daily users of AI have crested the 30% mark, so almost a third of their code is being written by AI and actually getting merged, passing through code review, and landing in a customer-facing environment.

6:06

One of my favorite use cases for applying AI is onboarding. I had a bit of a hunch that AI was going to be a great tool for onboarding, helping connect people with information earlier and sooner. And since I have all of this data, I thought, let me look at it quarter over quarter. In fact, if we compare Q1 of 2024, all the way over on the left side, with Q4 of 2025, onboarding time has been cut in half. This is looking at time to tenth PR: by the time a developer merges their tenth PR, they've hit a pretty important onboarding milestone that the industry has mostly aligned on, and that time has now been cut in half. When we correlate that with the uptick in AI usage, it makes a really pretty graph. AI is fantastic for onboarding. And this is not just for brand-new hires to your company; we've also seen plenty of evidence that it holds for engineers who are moving between projects, and even for non-engineers onboarding into projects. What's really important about this number is a separate study by Brian Houck at Microsoft, co-author of the SPACE framework of developer productivity. They found, in Microsoft's context, that time-to-tenth-PR performance sticks with an engineer for their first two years of tenure. So if you onboard faster, that productivity gain isn't just onboarding; it stays with them for at least two years after they start at the company. This is a very important and significant trend that we're seeing: using AI to connect developers, reduce cognitive load, and get them onboarded more quickly into their codebases.
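To make the metric concrete, here is a minimal sketch of how "time to tenth PR" could be computed from merge records. The function name, field layout, and sample data are my own assumptions for illustration; the talk only defines the metric itself (time from a developer's start date to their tenth merged PR).

```python
from datetime import date, timedelta

def time_to_nth_pr(start, merge_dates, n=10):
    """Days from a developer's start date until their nth merged PR,
    or None if they have merged fewer than n PRs so far."""
    merged = sorted(merge_dates)
    if len(merged) < n:
        return None
    return (merged[n - 1] - start).days

# Hypothetical developer who merges a PR every 3 days after starting.
start = date(2025, 1, 6)
merges = [start + timedelta(days=3 * i) for i in range(1, 13)]
print(time_to_nth_pr(start, merges))  # 30
```

Averaging this per cohort (e.g., by start quarter) is what produces the quarter-over-quarter onboarding trend described above.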

7:45

One thing that's really important for me to call out: although I have just shared industry benchmarks with you, averages are just math. As the poles move further apart, the average stays the same. Average does not mean typical. It does not mean what is going to happen to you, and it doesn't mean what a common experience is. One thing that is absolutely true, one thing that is common, is that there is no typical experience with AI. There is no typical experience with AI. It is extremely different in every single company, because every company has its own problems and its own culture. This uneven impact can take us back to space for just a minute.
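The "average does not mean typical" point can be shown with a tiny, made-up example (these numbers are illustrative only, not the benchmark data from the talk): two teams report identical mean hours saved per week, yet the typical developer's experience is completely different.

```python
# Illustrative only: invented weekly hours-saved figures for two teams.
healthy = [7, 8, 9, 8, 8]      # AI amplifying an already-healthy system
struggling = [0, 1, 0, 39, 0]  # one outlier props up the team's average

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    return sorted(xs)[len(xs) // 2]

print(mean(healthy), mean(struggling))      # both 8.0 -- same average
print(median(healthy), median(struggling))  # 8 vs 0 -- very different "typical"
```

The same mean hides two opposite realities, which is exactly why a benchmark average tells you little about what will happen in your organization.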

8:26

So we can go back to the origins of the universe. With the big bang, there was a massive release of energy, and as that energy released, the space and time between objects grew: things moved apart. For a lot of us, the emergence of AI in our organizations and in the industry has felt a lot like this big bang. We've had an explosive release of energy at the center of our world, and things keep moving apart. Organizational performance is multi-dimensional, and organizations are heading off toward different extremes based on what they were doing before. AI is an accelerator. It's a multiplier, and it is moving organizations off in different directions. The best example I can share of this is quality. In this case, this is not every organization, but some organizations are facing twice as many customer-facing incidents. This is from a sample of over 67,000 developers from that same time frame of November to February. Looking just at that window, some organizations are experiencing twice as many customer-facing incidents, while at the same time other companies are experiencing 50% fewer incidents. Some companies have used AI with a really healthy system; it has amplified that system, and they are seeing fewer incidents. They're moving faster, accelerating with higher quality, higher code maintainability, and higher change confidence. On the other side, though, blasting off into the other part of the universe, we have organizations that were already dysfunctional. Now they're more dysfunctional: they're dysfunctional, and dysfunctional faster.

10:07

Similarly to this uneven impact,

10:10

organizations are seeing really uneven

10:12

results like economically from using AI.

10:16

there are a lot of steep drop off drop

10:18

offs when it comes to using AI in a

10:19

pilot context to production and then

10:21

actually trying to tie it to profit.

10:24

This is from uh an MIT study that was

10:26

published in July of 2025 called the Gen

10:29

AI divide. And what the study concluded

10:32

they did a survey of 152 organizations

10:34

was that right now where we are in the

10:36

industry is that we have really high

10:37

adoption, right? That 92.6 number. Um

10:40

DORA also does its own research. We're

10:42

hovering around that 90% adoption

10:43

number. high adoption but actually low

10:46

transformation because as it turns out

10:49

transformation is really uncomfortable

10:51

and organizations that were ready to

10:53

give up on the cloud transformation on

10:56

the agile transformation are also giving

10:58

up on their AI transformations. It is

11:01

really really difficult to look at your

11:04

whole organization and look at the

11:05

problems and think we got to change

11:08

something about this and that is what

11:10

organizations need to do in order to

11:11

actually see change to their bottom

11:14

line.

11:16

All of this to say back to my previous

11:18

point we have 92.6 adop uh percent

11:20

adoption among developers in our

11:22

industry but adoption doesn't mean

11:24

impact. Using the tool doesn't mean that

11:26

it's going to actually advance your

11:28

organization or do anything. It is an

11:31

organizational problem that needs

11:32

organizational change management. But

11:35

that's not really what we were promised

11:36

with all of the hype was like, hey,

11:38

experiment with AI and then something

11:39

happens and then we profit.

11:42

What happens though is that these tools

11:44

were primarily deployed into individual

11:47

coding tasks. And what this MIT study

11:50

found in this high adoption low

11:51

transformation is that when we apply it

11:53

only to the surface area of a developer

11:55

sitting at their desk there is a very

11:58

very low ceiling of productivity gain.

12:01

This is an organizational problem. If we

12:03

want organizational results we have to

12:04

think about it on an organizational

12:06

level not on a coding task level.

12:10

Fortunately, our universe is expanding

12:11

right now

12:13

and that expanding is coming through the

12:16

use of agents in agentic workflows.

12:19

Our universe is getting bigger and so

12:21

are all of the promises and all of the

12:24

hype, but so is the possibility.

12:28

So, let's go back to the moon landing,

12:30

right? Like the ultimate hype was that

12:32

we're all going to be living on the moon

12:33

by now in flying cars like jetson style.

12:37

Um, similarly here we have a little bit

12:40

of like crazy ideas. Um, Gas Town, if

12:42

any of you have used it, um, there

12:44

there's just like there's so much crazy

12:46

stuff to do right now. Um, Gas Town is

12:48

infinitely interesting to me. There are

12:51

so many interesting things. Um,

12:52

disclaimer, don't use Gas Town. It is

12:54

unhinged. Um, we've got OpenClaw,

12:58

Maltbot, Clawbot, whatever it's called.

12:59

We've got Ralph loops. We've got all the

13:01

stuff, right? There is so much

13:02

experimentation and so much fun. It's

13:05

just really fun to build. Um, but me

13:08

building my nail polish matching like

13:11

color scheme app while I'm sitting at

13:12

the nail salon is not the same as a

13:15

multinational bank being able to change

13:17

their revenue because of AI. Those are

13:19

really different things. Um, and I was

13:22

at this retreat with Martin Fowler and

13:25

Kent who are I think back there. Hello.

13:26

We'll talk about that a bit more later.

13:28

We spent a lot of time trying to connect

13:30

AI and the use of AI to bottom line, to

13:33

profit, to P&L. And interestingly, kind

13:35

of where we landed at the end was this

13:37

question of like what is the value of

13:39

innovation? Was it still valuable to go

13:41

to the moon even though I'm not really

13:43

located on the moon right now? And I

13:45

would argue that yes, it is valuable to

13:48

innovate. And that can get into some

13:50

murky area because this is a business,

13:52

right? This isn't just society and and

13:55

doing things for the good of humankind.

13:56

we have to do them in an economic

13:58

context and that can get a little bit

13:59

tricky.

14:01

So when we think about this quote

14:03

something uh somewhere something

14:04

incredible is waiting to be known. There

14:07

is a sense of wonder and AI and space

14:09

are both the age of exploration and it

14:12

is so exciting.

14:15

But the point of going to the moon

14:17

wasn't that we all need to live on the

14:19

moon. In fact, the point of going to the

14:21

moon and the point of exploring and

14:22

doing all this crazy stuff was to

14:24

improve life on Earth. It was to use the

14:27

space exploration and all of this wonder

14:29

to apply it to the systems level

14:30

problems that we had back on Earth. Not

14:33

everyone wants to live on the moon. Um,

14:35

but we have sunglasses, we have space

14:37

blankets, we have barcodes, we have

14:39

quartz watches. We have so much

14:41

technology and so many improvements back

14:44

on Earth because of this crazy age of

14:46

exploration where we all went to space.

14:49

Even though we're not living on the

14:50

moon, we've still used the lessons and

14:52

applied it to our systems back here on

14:54

Earth.

14:55

And so thinking about agentic workflows,

14:57

agents expand the possibilities of what

15:00

we can build, how we can build it, and

15:02

who we can build it for. Not everyone

15:04

goes to the moon, and it's okay not to

15:06

go to the moon. Not everyone is going to

15:08

be building crazy stuff with Gas Town

15:11

every day in in your enterprise context.

15:14

And that's also okay because the

15:15

experimentation helps push the boundary

15:18

of what's possible and helps us think

15:19

about solving problems in new ways.

15:22

So, let's talk a little bit about how

15:24

agents are being used in the industry

15:25

right now. Again, this is new data that

15:27

I'm sharing for the first time here. Um,

15:29

agentic use is on the rise. There's not

15:31

a lot of companies, honestly, that are

15:33

so far ahead of the curve that they're

15:34

already uh instrumenting their agentic

15:37

use cases with really good telemetry.

15:38

This sample is a little bit smaller.

15:40

It's around 3,000 developers at six

15:42

companies. Keep in mind, these companies

15:44

are ahead of the curve. They're already

15:45

instrumenting their agentic workflows

15:47

with telemetry. Um, we have about 80% of

15:51

developers using these agentic workflows

15:53

at least once a week with over 50% using

15:56

agentic workflows every single day to

15:58

get their work done.

16:00

We talked about codecs I think in the

16:02

previous panel. So on February 2nd, the

16:05

Codeex desktop app was released and

16:07

since then there's been a million over a

16:09

million downloads by now. I got this

16:10

data yesterday. I'm sure it's quite

16:12

different by now. There's been a 60%

16:14

growth in users just in the last week.

16:17

Um they also launched uh GPT 5.3 codecs

16:21

uh last Thursday. They're processing

16:23

trillions of tokens per week. Internally

16:27

at OpenAI, 95% of developers are using

16:29

codecs to ship stuff. And of the

16:33

developers who are using codecs versus

16:35

other AI tools, the developers who use

16:37

codecs are shipping about 60% more PRs

16:40

per week, which is very interesting. a

16:42

data point, not the only data point, but

16:44

it just speaks to the very high ceiling,

16:47

the high possibility, the sense of

16:48

wonder that we have with building all of

16:50

the stuff with cool new tools like

16:52

Agentic Workflows.

16:55

I want to bring it back to a non-AI startup, though. I want to highlight Haven Headache and Migraine Center. This is a company based here in San Francisco, actually just a few blocks away. Haven set out to answer the question: can we solve headaches over Zoom? And it turns out you can. So if you're a headache sufferer, this might be useful for you to learn about. In healthcare, it's really crucial for Haven and their development team to distinguish between using agents for durable code versus disposable code. One of the very cool things they're doing, since they are a disruptor, a small startup, is using agentic workflows to rapidly prototype new patient workflows. They're working on a patient portal, building with Ralph loops: taking Linear and Figma artifacts, turning them into a PRD, emitting that as JSON, and then just letting the Ralph loops run. What they're getting, though, isn't garbage disposable AI slop. What they're getting is really high-quality prototypes with really excellent documentation and excellent tests, at much higher quality and a much faster rate than if they had built it by hand the old-fashioned way.

18:03

The other thing they're doing that I really admire is improving the standard of care for their patients by training a HIPAA-compliant model on hundreds of thousands of symptom logs. Haven meets you where you're at: you get a text message, you can log your symptoms, and then they can instrument your care and figure out what needs to happen from there. They're training that HIPAA-compliant model on hundreds of thousands of these messages so that each message can be routed, say, to a medication refill or to scheduling a follow-up appointment. It just meets you where you are. And the result is that they have 3x the industry average in customer satisfaction for a healthcare tool like this, but also real, meaningful clinical outcomes: their patients have fewer headache days per month, and the headaches they do have are much less severe. So good job, Haven.

18:49

There are lots of examples of big enterprise companies experimenting with agentic workflows. There's an enterprise manufacturing company using them solely for internal developer purposes: they used Copilot and Claude to build out a dev portal to accelerate developer onboarding. At Cisco, 18,000 engineers are using Codex daily, for complex migrations and also for code review, leading to a 50% reduction in the amount of time it takes to do code review. There's also a really cool paper on JPMorgan Chase's multi-agent framework for annotation, MAFA; if you Google that, you can find the source paper. It's really fascinating. What they're doing is building out a whole business of agents, a true multi-agent workflow, similar to Gas Town, where each agent has a special job to do. What they're also doing in this model is introducing consensus among the agents. They're taking all of these interactions and annotating them: what was the intent, was it an FAQ, what were all these interactions? The agents annotate them, and then another set of agents is responsible for reranking, calibrating, and validating the output. And then, of course, we have to introduce consensus algorithms to the party, because now we have multiple agents with possibly multiple different opinions about things. This is really fascinating, and I believe consensus among agents is going to be a huge problem to solve in 2026.
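To make the consensus idea concrete, here is a minimal sketch of one very simple consensus rule: majority vote over agent labels, with anything below a quorum escalated for review. This is not MAFA's actual algorithm (see the JPMorgan Chase paper for that); the function, labels, and quorum threshold are illustrative assumptions showing why consensus machinery is needed once multiple agents can disagree.

```python
from collections import Counter

def consensus(labels, quorum=0.5):
    """Return the label a strict majority of agents agree on,
    or None if no label clears the quorum (escalate to a validator)."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) > quorum else None

votes = ["faq", "faq", "refill-request"]  # three annotator agents
print(consensus(votes))   # "faq" wins 2 of 3

split = ["faq", "refill-request"]         # tie: no majority
print(consensus(split))   # None, so a validating agent must review
```

Real systems layer much more on top (reranking agents, calibration, weighted votes), but even this toy rule shows the core problem: disagreement must be detected and resolved somewhere.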

20:14

I spoke about this retreat. I was lucky

20:16

enough to be invited by Martin Fowler

20:18

and thoughtworks to the future of

20:19

software development retreat celebrating

20:21

the 25th anniversary of the agile

20:23

manifesto of Gerge joined me. A few

20:26

other folks who are here also joined me.

20:28

We spent a day and a half up in the

20:30

mountains talking about agents. That's

20:33

really all we talked about about using

20:34

agents responsibly, ethically,

20:36

sustainably, how we can use them for

20:38

organizations. And our conclusion even

20:40

though there was so much interesting

20:42

stuff, Steve Yaggi was there, we were

20:44

working on Gas Town things, like there

20:46

was a lot of experimentation happening,

20:48

but the conclusion that we came to was

20:50

that AI does not solve organizational

20:52

systems problems. It only can do that

20:54

when you apply AI to the system problem,

20:56

which means you need to acknowledge that

20:58

the system problem exists in the first

21:00

place. AI is not a magic silver bullet.

21:02

Even though things like Gastown exist,

21:04

even though there is so much sense of

21:06

curiosity and wonder in the universe,

21:09

um we kind of had a sort of off-the cuff

21:12

conversation. Uh Kent Beck, Steve and I

21:14

were just catching up um outside uh of

21:17

one of the sessions in between

21:18

conversations. And here's sort of where

21:20

we summarized our thoughts.

21:21

Organizations are constrained by human

21:23

and systems level problems. We remain

21:26

skeptical of the promise of any

21:28

technology to improve organizational

21:30

performance without first addressing

21:32

those human and systems level

21:33

constraints. We remain skeptical and we

21:36

also remain human because the risk is if

21:39

we don't address the systems level

21:41

problems, we will just take them to

21:43

space with us. We will just take them to

21:46

space with us. We're not actually going

21:47

to solve the human factors that are the

21:50

driving force behind all of the

21:52

constraints that organizations have

21:53

right now. We can apply AI to those

21:55

problems, but we still need to solve

21:57

them. We can't just go to the moon and

21:59

expect that pollution and garbage and

22:01

traffic aren't going to be a problem

22:02

anymore.

22:05

And so the question is not how to

22:06

colonize Mars, but the question is how

22:08

to get real organizational impact with

22:10

agents and AI. At this retreat, we also

22:13

talked a lot about common factors that

22:15

we see. What do we see organizations

22:17

doing? What are the common patterns that

22:19

is kind of like the secret to to

22:21

winning? What do they have in common?

22:23

The first one is that organizations who

22:25

win with AI and are winning with AI have

22:28

goals and they measure their progress

22:30

against those goals. Spray and prey does

22:33

not work. Spray and prey, what I mean by

22:36

that is just giving all of your

22:38

developers licenses and hoping for the

22:40

best. It does not work. I can say that

22:43

very very clearly. I have a lot of

22:44

evidence that does not work. If you can

22:47

point AI innovation and that

22:49

experimentation to a problem, have a

22:51

concrete goal and then measure if you're

22:53

reaching that goal, that is what winning

22:55

organizations are doing right now.

22:57

Because, as Spock has told us, insufficient facts always invite danger. We need to measure things. We need to have data. And I know this is really difficult for a lot of organizations right now, because developer productivity and engineering excellence are also really hard problems, and this is all happening at the intersection. So I have something that can help if this is a problem you're facing in your organization: the AI Measurement Framework. This is a framework that I co-authored with Abi Noda, the CEO of DX. It complements our Core 4 framework, which some of you might have heard of; otherwise, it's in the impact column here. What we're looking to do is track not just usage, adoption, and utilization of AI, but then also translate that into real organizational impact. Is this changing your speed, your developer experience, your quality, your innovation ratio? Those are really important questions for connecting adoption to impact. Finally, we have to look at cost. Are we getting a good deal? Maybe some of us are, for now. And as the cost of these tools keeps going up and up, we need to understand whether the investment is the right one.
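That cost question is ultimately arithmetic, and a back-of-the-envelope version can be sketched in a few lines. All of the numbers below are illustrative assumptions (the only figure taken from the talk is the roughly four hours saved per week); this is not the AI Measurement Framework's actual cost model.

```python
# "Are we getting a good deal?" per developer, per month.
hours_saved_per_week = 4.0    # roughly the ~4 hr/week figure from the talk
loaded_hourly_cost = 100.0    # assumed fully loaded engineer cost, USD
weeks_per_month = 4.33

seat_license_per_month = 40.0   # assumed seat price
token_spend_per_month = 150.0   # assumed metered usage

value = hours_saved_per_week * loaded_hourly_cost * weeks_per_month
cost = seat_license_per_month + token_spend_per_month
print(f"value ~ ${value:,.0f}/mo, cost ~ ${cost:,.0f}/mo, ratio ~ {value / cost:.1f}x")
```

The point of writing it down is not the specific ratio but that every input is measurable, so the answer can be re-checked as token prices and usage change.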

24:04

The second thing that is helping

24:06

organizations win is that developer

24:09

experience matters now more than ever.

24:12

Here is a piece of very unconventional

24:14

advice that I'll give you is just

24:15

anything that you were going to talk

24:16

about with your leadership team about

24:18

developer experience. Just call it agent

24:20

experience and you'll get money for it.

24:24

It's funny but it works. Um, it works

24:27

because developer experience, feedback

24:29

loops, um, you know, clearly defined

24:31

services, great documentation, fast CI,

24:35

these are all things that we have been

24:36

screaming about for decades, literally,

24:39

and we've been begging for pennies from

24:41

our organizations to please let us

24:43

invest, please let us invest in

24:44

developer experience. And we've been

24:46

told no over and over again. Come to

24:48

find out, in fact, these are the things

24:51

that make AI really successful. We need

24:52

to have really solid testing and quality

24:55

practices. We need to have great

24:56

documentation. These are critical for

24:58

agentic workflows. It is disheartening

25:01

that we didn't want to spend the money

25:02

when it came to human engineers, but

25:04

when it comes to robot engineers, we're

25:06

okay with it. But that is the world that

25:08

we live in and let's capitalize on our

25:09

opportunity. So Devex matters more than

25:12

ever. In fact, when we look at the data

25:13

right now, remember we're hovering

25:15

around that 4hour mark for time savings.

25:18

When we look at all of the other factors

25:20

of developer experience, AI time

25:22

savings is not going to make up for a bad

25:24

meeting culture, lots of

25:26

interruptions, and, you know,

25:29

developers who are constantly being

25:31

pulled out of their work: unplanned work,

25:33

interruptions, outages, those kinds of

25:34

things. AI will not make up for that. We

25:36

can use AI to help solve those problems,

25:38

but AI in and of itself is not going to

25:40

make up for them. Then when we look at

25:43

the bottom half, build and test wait

25:45

time, toil, and dev environment, and put all

25:47

of that together, we realize that just the

25:49

time savings from coding task speed-up

25:51

isn't going to get us very far. But what

25:53

will get us far is when we can take AI

25:56

and point it at those problems. Can we

25:59

use AI to help reduce meeting frequency?

26:01

Can we use AI to improve CI wait time?

26:04

Can we use AI to reduce dev environment

26:06

toil? That is what winning organizations

26:08

are doing right now. They are putting

26:10

DevX at the center of their universe and

26:12

seeing AI as a tool to fix systems-level

26:15

problems.

26:18

They're also doing it on an

26:19

organizational level. If you want

26:21

organizational outcomes like revenue,

26:25

P&L, time to market, you have to think

26:28

about AI as an organizational

26:30

problem, not as an individual problem

26:32

that your developer needs to solve at

26:33

their desk. It has to apply to workflows

26:36

that span entire value streams.

26:39

Back to that MIT study: when we looked at

26:41

the barriers to organizational adoption

26:43

or the organizational barriers to AI

26:45

adoption, they weren't technical. This

26:48

wasn't about the models necessarily. It

26:50

wasn't even about the tools that wrap

26:52

the models. It was about things like

26:54

change management or lack of executive

26:56

sponsorship, where you have an executive

26:58

team saying go with AI, but they

27:00

themselves have never cracked their

27:01

laptop open and fired up Windsurf or

27:04

Claude Code or Codex. Poor user

27:07

experience, just very unclear

27:09

expectations about AI. Those are the

27:11

things that get in the way.

27:13

If this sounds familiar to you and

27:15

perhaps your organization could do a

27:17

better job, there's two things that I

27:18

want to point you to. Um the first one

27:20

is the DORA AI capabilities model. These

27:22

are models that kind of communicate and

27:24

help you get ready for AI. So think

27:27

about this as an AI readiness model or

27:28

an AI capabilities model. It draws on a

27:31

crazy amount of data from organizations

27:33

that DORA studies. They do a lot more

27:34

than just the four key DORA metrics,

27:38

finding correlations between practices

27:40

that organizations have and good

27:42

outcomes with AI. So, if you use AI and

27:44

have a clear, communicated AI

27:46

stance, you are going to do better

27:48

organizationally than a company that

27:50

does not have one. You can find this at

27:51

dora.dev. It's the DORA AI capabilities

27:53

model. There was just a new paper that

27:55

came out last month.

27:58

Nathan is here, who leads DORA over at

28:00

Google Cloud. If you want to talk to him

28:01

about this, he's probably the guy.

28:03

The other one is the Thoughtworks

28:05

Forest framework. This is similar to the

28:07

AI capabilities model kind of a

28:08

different flavor of it. If you go to

28:10

thoughtworks.com and look in their white

28:11

papers, you can read through this, but

28:13

these are both really solid

28:14

well-researched, industry-backed AI

28:16

readiness models to help convince your

28:19

leadership team if you need that, or

28:21

just help you do an internal audit of

28:22

whether we're doing the right things to make

28:24

ourselves ready to reap the

28:27

benefits of all this experimentation.

28:31

The last thing is that organizations who

28:33

are doing really well with AI right now

28:35

are experimenting by solving real

28:37

customer problems. Again, space

28:39

exploration and going to Mars is great,

28:41

but that is not sustainable for your

28:43

entire organization to be

28:44

experimenting with going to Mars. It

28:47

just costs too much money. It distracts

28:48

too much from the core business problem.

28:50

It does not serve your customers. So,

28:52

keep experimentation going. That

28:55

experimentation can be really

28:57

laser-focused on real customer problems that

28:58

you have. And that is how you're going

28:59

to see the organizational results.

29:03

Somewhere something incredible is

29:05

waiting to be known.

29:07

There is so much possibility of how we

29:09

can build what we can build, who we can

29:11

build it for right now with AI and

29:12

agents are just accelerating this. They

29:14

are expanding our universe.

29:17

We are definitely in an age of

29:19

exploration. The thing I

29:22

want to urge all of you to take with you

29:23

into the rest of the sessions today is

29:25

to find that balance between a sense of

29:27

wonder and a sense of awe and aiming for

29:30

Mars and aiming for your moon colony but

29:32

also understanding that we need to solve

29:33

the problems here on Earth and we have

29:35

to live in this reality. So please stay

29:37

grounded, stay skeptical, stay human,

29:40

most of all, stay pragmatic.

29:43

Thank you all. [music]

29:44

[applause]

Interactive Summary

The video discusses the current state of AI in organizations, drawing parallels with the age of space exploration to highlight both the wonder and skepticism surrounding new technologies. It presents new industry benchmarks, revealing high AI coding assistant adoption among developers (92.6% monthly, 75% weekly) and a significant increase in AI-authored code reaching production (26.9%). AI has dramatically reduced onboarding time, but its impact is uneven across organizations, accelerating both functional and dysfunctional behaviors. The speaker emphasizes that despite high adoption, true organizational transformation with AI is low due to systemic, not technical, barriers. To achieve real impact, organizations must set clear goals, measure progress, prioritize developer experience (DevX) by applying AI to system-level problems, and focus experimentation on solving real customer needs, balancing technological wonder with pragmatism.
