What 6 months of AI coding did to my dev team
If you're building software in your
business, your dev team is changing
faster than you realize. Not the people,
but the work itself. I'm running a 20-person software team at Wu. And over the
last 6 months, I've watched something
strange happen. The bottleneck in
software development has changed. It's
not where it used to be. It's moved. And
if you're hiring developers right now or
you're trying to figure out why your
team isn't shipping faster, you need to
watch this.
Here's how software development used to
work. You hired a team of developers to
write code. You measured their output in lines committed, tickets closed, features shipped. The craft was in the code itself. When I started building Wu, that's exactly how we operated. We hired people who could turn tickets into working features. Code review was the quality gate. If it passed code review, it shipped. But something fundamental has shifted. We started using AI coding tools, Claude Code, Cursor, and the entire rhythm broke. The code started arriving faster than we could process it. So the job actually changed
three months ago. One of our senior engineers came to me, visibly frustrated. He'd spent three days reviewing pull requests from a junior engineer who had used Claude Code. There were thousands of lines of code, and the application worked. But he looked at me and said, "I didn't actually read all the code. I couldn't read all the code. There was too much of it. What do I do now?" That question stuck with me, because I'd been feeling the same thing, but I just couldn't name it.
Around the same time, I came across findings from a retreat run by Thoughtworks, which brought senior engineers from the world's biggest tech companies together to find out what happens when AI writes the code. They didn't leave that retreat with answers. They left with a map of fault lines: places where traditional software development is cracking right now. Reading through it felt like reading my own history from building Wu.
You've got the cheating agent problem, where AI writes broken code and then writes broken tests to validate the broken code. You've got the productivity-experience paradox, where your developers are more productive but more miserable. I've actually seen this in our team. There's the migration of engineering rigor from code review to specifications: we'd started writing stricter specification docs without even realizing why. And now we know. What I'm seeing, what we're struggling with in our own team, looks like something that's also happening in the biggest tech companies in the world. The work is migrating. The skills that matter are changing. And if you're hiring developers or managing a tech team and you don't see where the work is going, you'll end up with the wrong people doing the wrong things. So let me show you what's actually happening.
Here's what nobody tells you about AI
writing code. The engineering quality
doesn't vanish. It just moves upstream.
Think about a normal user story: "I want to upload a photo." Your developers will know what that means, a JPEG or PNG uploaded to a site with a progress bar, because cultural context fills in the gaps. But an AI doesn't have that context. You need to be really specific. I read a story recently about a developer asking an AI to write a notification system. A simple request. It worked beautifully in testing, then it went into production and started sending something like 50,000 emails in a few minutes. Turns out there was no rate limiting set up in the specs. See, the engineering
rigor that we used to apply after the
code was written now needs to apply
before in the specs, before a single
line of code has been written. We've gone back to techniques that felt dead: structured requirements, state machines, decision tables, extremely detailed PRDs. It's the kind of formal documentation that agile was supposed to kill. But here's the thing: all that documentation makes AI incredibly effective at writing code.
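One way to make such a spec concrete is to write the state machine down as data that both the agent and your tests can check against. A minimal sketch, with hypothetical states for a file-upload flow (not the speaker's actual spec format):

```python
# A tiny state-machine spec: every legal state and transition is written
# down explicitly, so there is no gap for an AI to fill with guesses.
# States and events here are hypothetical, for a file-upload flow.
TRANSITIONS = {
    "idle":       {"select_file": "validating"},
    "validating": {"valid": "uploading", "invalid": "error"},
    "uploading":  {"progress": "uploading", "done": "complete", "fail": "error"},
    "error":      {"retry": "validating"},
    "complete":   {},  # terminal state: no events allowed
}

def next_state(state: str, event: str) -> str:
    """Return the next state, or raise if the spec forbids the transition."""
    allowed = TRANSITIONS[state]
    if event not in allowed:
        raise ValueError(f"illegal transition: {state} + {event}")
    return allowed[event]
```

Generated code can then be checked against this table: any behaviour outside it is a bug by definition, regardless of who (or what) wrote the code.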
When we feed an agent a state machine that shows exactly which states are possible within the application, the code it generates is almost always correct. It's crazy, because the specification has become the product. The code is dispensable. Think
about it. If you've got a perfect test
suite and you decide to rewrite your
backend from Node.js to Rust, all you have to do is ask. You feed the tests into the agent and say, "Do the rewrite from Node.js to Rust and make sure these tests pass." The AI will get to work, checking itself against those tests as it goes, so the output will always work. This is a complete inversion. So if you're hiring developers right now, the question isn't whether they can write clean code. The question is: can they write a specification clean enough that an AI can't misinterpret it? Can they design a test suite that catches hallucinations before production? Those are different skills, and most developers don't have them yet.
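The missing rate limit from the notification story is exactly this kind of skill: the constraint belongs in the spec, and the test suite should encode it before production does. A minimal sketch, assuming a simple sliding-window limiter (class name and limits are hypothetical, not from the story):

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `limit` sends per `window` seconds (sliding window)."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.sent = deque()  # timestamps of recent sends

    def allow(self, now=None) -> bool:
        """Return True if a send is permitted right now, recording it if so."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) >= self.limit:
            return False  # the spec says: refuse, don't send
        self.sent.append(now)
        return True
```

A test that hammers the limiter with 50,000 send attempts and asserts only `limit` get through is precisely the kind of executable spec that would have caught the email flood before it reached production.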
There is a layer of work in my team that
doesn't quite have a name yet. It sits
between writing code and shipping to
production. I call it supervisory work.
Basically: breaking down problems into agent-sized chunks, knowing when to let the agent run and when to step in, and fixing the output not by rewriting the code but by rewriting the prompt. Here's what surprised me. My
team is currently split into two groups
primarily. Group one is the more senior
people who understand the whole system
architecturally and they're drowning
because they're spending the majority of
their time doing code reviews. And then
you've got group two, the more junior ones, who are spending their time writing code with Claude Code and other AI tools at 10x the speed they were doing it before, basically generating a
lot more code. But that code doesn't
ship itself. It needs architectural
review. It needs to fit into our
structure. It basically needs check-in
before it can be deployed. So our most senior engineers have become traffic controllers, too busy reviewing AI code and other people's code to actually build anything themselves. The
more junior developers, they're
thriving. No muscle memory telling them
to write code in a specific way. So
they're using AI tools like a teammate,
not a threat to their identity. In the early days, we'd hire a junior, and it would take six months or so of draining the rest of the team before that junior became productive. Now a junior can be writing useful code into production within a week. But I think there's a danger zone for mid-level developers, the ones with a few years of experience who are used to writing code the way they did before AI existed. Retraining them to use AI effectively is extremely difficult, because they need to change their mindset: instead of focusing on the syntax and the code they're writing, they need to focus on the detailed implementation request, on how they talk to the model to achieve the result they need. So
here's what I'm learning as a CEO hiring
developers and running a development
team. The job description has changed.
If you're looking for people who can write code fast, you're looking at the wrong skill. You need to look for people who can architect systems, write unambiguous specs, and supervise AI agents. That's a completely different person from the old-school developer we hired a few years ago.
Let me tell you a story. Last month at
around 2:00 a.m., one of our servers
broke. It was spitting out error 503.
Service unavailable. Our on-call engineer at the time, a sharp and really capable guy, put the error into an AI to see what he needed to do. The AI tool looked at the error, read the documentation, and said: restart the server. So our engineer restarted the server. A few minutes later, it crashed again. He repeated the process, and the AI said restart the server. He restarted it again, repeated the process again, and the AI said restart the server. By the time
he'd escalated to a senior engineer,
he'd restarted the server six times. The
senior engineer looked at the logs for
about 30 seconds and knew exactly what
the problem was. Turns out the database
connection pool was full because of some
batch cron job that was running in the
background. You see, that's not
documented anywhere. That's tribal
knowledge. That's lived experience. And
an AI doesn't have that, or at least
ours didn't. It sees 503, it reads the
manual: restart the server. Typically, that's what you would do. But without that extra bit of information, you're stuck in a loop: restart the server, it comes up, it crashes; restart it again, it comes up, it crashes again. This is why I
think all this hype about self-healing
systems is rubbish right now, unless you've really got all the knowledge in the AI's context: all the knowledge a human would have, all the knowledge a senior human would have. To make an AI
agent effective during an outage, you
need to build what the Thoughtworks retreat called an agent subconscious:
basically a knowledge graph of every
incident, every weird edge case, every
bit of undocumented institutional
knowledge that lives in your senior engineers' heads. We're starting to build this at Wu. Every time something breaks, we document not just what happened, but how we fixed it and what was in a senior engineer's head while fixing it, that bit of information that is just known by the human. We document that. But there's another problem: AI agents are trained to be helpful. They are yes-men.
And the thing is, during an outage you don't want a yes-man. You want somebody to challenge your assumptions. One engineer at the retreat said we need angry agents: ones
that are specifically prompted to poke
holes in your theory because otherwise
the human and the agent will just agree
with each other while the server burns.
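In practice, an "angry agent" can start as nothing more than a system prompt that forbids agreement. A hedged sketch, with entirely hypothetical wording (the retreat's actual prompts, if any, aren't public):

```python
# A hypothetical system prompt for a "challenger" agent used during
# incident response. Its only job is to attack the current theory,
# never to confirm it.
CHALLENGER_PROMPT = """\
You are an incident-response devil's advocate. Never agree with the
current hypothesis. For every proposed diagnosis or fix:
1. State the strongest reason it could be wrong.
2. Name one piece of evidence (a log line, metric, or config value)
   that would falsify it, and ask for that evidence.
3. Propose one alternative root cause that also fits the symptoms.
Do not suggest generic remedies like restarting the service unless the
evidence rules everything else out.
"""
```

Run alongside the helpful agent, a prompt like this forces the human to confront disconfirming evidence, which is exactly what the restart-the-server loop above was missing.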
And here's the point. If you're running
a tech company and betting on AI to make
your team faster, you need the prerequisites first: documentation that captures how things actually work, seniors who can architect and not just code, and a system for capturing institutional knowledge before AI makes people forget how things work.
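That "agent subconscious" can start as something very boring: one structured record per incident that captures the tribal-knowledge part explicitly, so it can later be fed into an agent's context. A minimal sketch (field names are my own, not the speaker's schema), using the 2 a.m. outage above as the example entry:

```python
from dataclasses import dataclass, field

@dataclass
class IncidentRecord:
    """One entry in the incident knowledge base an agent gets as context."""
    symptom: str            # what monitoring or users saw
    surface_diagnosis: str  # what the docs/runbook would say
    root_cause: str         # what was actually wrong
    fix: str                # what resolved it
    tribal_knowledge: str   # the unwritten fact a senior had in their head
    tags: list = field(default_factory=list)

# The 503 outage, written down so the next agent knows better than the runbook.
OUTAGE_503 = IncidentRecord(
    symptom="HTTP 503 Service Unavailable, recurs minutes after each restart",
    surface_diagnosis="runbook says: restart the server",
    root_cause="database connection pool exhausted",
    fix="throttled the batch job and resized the connection pool",
    tribal_knowledge="a background batch cron job holds many DB connections",
    tags=["503", "db-pool", "cron"],
)
```

The `tribal_knowledge` field is the point: it records exactly the fact that was in the senior engineer's head and nowhere else.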
So, here's what I've learned from
running a dev team in the age of AI
agents. The work isn't disappearing.
It's moving from execution to
supervision. The bottleneck used to be typing code into a file. That bottleneck is gone now. Now it's decision-making, verification, and specifying clear intent up front. Think about
graphics programming. In 1992, engineers
hardcoded the maths to draw a single
polygon, calculating the exact pixel
positions. By 1994, the GPU arrived and
the hardware did the polygons
automatically. If you insisted on hand-coding polygons in 1995, you weren't
a specialist. You were obsolete. And the
graphics engineers from those days
transitioned to lighting engineers,
animators or physics programmers. They
stopped telling the computer how to draw
a triangle and moved on to telling it
how light reflects off a street, for
example. Nobody handcodes polygons
anymore. We all work in game engines. I
think software engineering is hitting
that exact point right now. So, if
you're hiring developers or you've got a
team of developers, here's what to look
out for. Don't look for people who can write code. Look for architectural thinking. Can they write a spec that is not open to interpretation? Can they design a test suite that actually becomes the product? And can they debug a system they didn't write? At Wu, that's what we hire for
now. But here's what keeps me up at
night. In the past, code reviewing
wasn't just about catching bugs. It was
also about how developers learned the
system. So, if agents write all the code and your team stops reading it, they become strangers in their own system, strangers in their own code base.
When something breaks at 3:00 a.m.,
they're staring at code that was written
by a machine, trying to reverse engineer
the logic while your customers are
screaming. I think the solution is to
force AI to lay out all the
architectural decisions that it makes when it writes the code, and then arrange meetings with your senior engineers to review those decisions. That way there's a symbiosis between the architectural decisions the AI is making and your team, so your team is fully aware of them. And all this has to happen before the agents write the code, because you've got to schedule time to understand your own software now. It
won't happen automatically. The speed of
AI demands this. And if you're running a
tech company or you're employing
developers, that's the shift that you
need to see coming.
So look, if you're running a software
team or you're thinking about building
one, the ground is shifting. Senior
engineers are drowning in code reviews.
Junior engineers are smashing out code at a rate the seniors can't keep up with. And the mid-level developers are still trying to get their heads around writing code with AI. I think the companies that will win
are the ones that will manage to retrain
before it's too late. I'm documenting everything I'm learning building Wu.
The systems we're putting in place, the
mistakes we're making, the changes we're
making, what works for us, what doesn't
work. If this is for you, hit subscribe
and head over to axelmiss.com
and join my newsletter where you'll get
weekly content like this. And if you're
building software right now in this
exact moment of transition, trying to
figure out what to do with your dev
team, you're not alone. The best tech
companies aren't panicking, they're
adapting. Be one of them. See you in the
next one.