Programming Skills that AIs Cannot Have & How You Learn Them
If you're a software developer, you know that
things are kind of tenuous for us right now.
The prevailing wisdom seems to be that software
development as a career is going extinct,
although I think that's largely based on a misunderstanding, and I'll get more into that later.
So as I write this, OpenAI's o3 model was just announced, although we can't look at it yet.
And as always, there have been a ton of claims made about how much better it is at programming and other tasks than any model before.
I'm skeptical that it's really going to turn
out to be that much better, but we should
find out sometime in the next few months.
2025 looks like it's going to be a wild year for us. But no matter how good o3 might be, there are things that experts believe this generation of AIs just cannot be capable of, both in general and with respect to software development in particular.
Today, I want to talk to you about what
developers can learn to do that these AIs can't, and why, and how you can go about learning it.
So to get this out of the way, the limitations
I'm talking about are based largely on the
work of Simon Prince, the author of this book,
Understanding Deep Learning, link below.
He did an interview that I've recommended
before on the Machine Learning Street Talk podcast:
"It really has no way to learn anything new,
other than its context vector, which it
forgets every session. And it has no way
to even remember that.
So it's missing all kinds of parts of the puzzle; I don't see how you could look at that and think that a superintelligence is around the corner."
Although really the whole interview is worth
watching. And the book has been instrumental in informing
my understanding of our current circumstances,
although it's an $80, 500-page textbook with lots of math and a 49-page bibliography.
So I wouldn't necessarily describe it as
approachable, or recommended for everyone.
In the unlikely event that he's wrong, and that I'm wrong, and OpenAI's o3 model is actually superintelligent, then this channel is garbage, YouTube will soon be replaced by The Matrix.
No job is safe, human life is irrelevant, and
all we can do is bow our heads and pray
to Roko's Basilisk that it might not torture
us for all eternity.
More info on The Basilisk below, although you
might not want to hear about it.
What could possibly go wron----
This is the Internet of Bugs.
My name is Carl, and I've been a software
professional for more than 35 years now.
I believe that the Internet has way, way too
many bugs already, and that AI is in the
process of making it much, much worse, and I'm
trying to reduce that as much as I can.
So today's AIs have limitations.
Today, I'm just discussing the serious
constraints they have when it comes to actually
writing
code.
So, One: they can only learn the kinds of things
that are in their training data.
Two: they are frozen in time.
They can sort through an enormous amount of information in short order, but they effectively have no ability to form long-term memories without expensive retraining.
If you've seen the movie Memento, they're
basically that guy.
They can read a previous conversation as part
of a prompt at the beginning of a new question,
but that's as close as they can get to
improving. How much would you want the Memento guy working on software that you depend on?
Three: they are severely limited in the amount
and breadth of context that they can
work with.
There's a lot of work being done to try to
increase this, but there are a number of
significant hurdles, including what's called
the "lost in the middle" problem, more on
that below.
But even if all the latest mitigation work were to magically succeed, it's nowhere near
the amount of context that you or I are capable
of understanding as humans.
So the things that AIs can't learn are the
things that require a lot of context and that
don't appear in documents scraped from books
and around the internet.
In other words, the things that can't be learned from classes and books, but can only be learned from experience.
The good news is, you can learn such things.
The bad news is, it's not really a quick
process.
But from where I sit, software developers have two choices: either learn the things that this generation of AIs can't get good at, or watch as the AIs chip away
at your job's tasks until there's nothing left
for you to do.
Well, three choices, if you count "Pray to the
Basilisk", but I don't.
Here's the thing about software bugs: with rare exceptions caused by short-term thinking, usually cost-cutting measures by some MBA or something, non-trivial bugs on the internet aren't intentional, and the vast majority of them aren't even found until after they've been released.
Those bugs aren't found by the programmers.
They're found by people using the software,
often in ways and in contexts that weren't
considered while the code was being written in
the first place.
This is the crux of the matter.
The AIs can only ever generate software in the
context that they've been given in their
prompt, but you can learn to expand your
context with experience.
You can look at a situation and recognize the kinds of things that could cause problems in the future, and unlike the AI, you can find out what your blind spots are and you can learn to compensate for them over time.
With practice and the right mindset, you can
develop an intuition for what can go wrong
and how to avoid it.
It'll never be perfect, but you can get better
and better, and it doesn't take that much
work to be a lot better than the AI on this
kind of thing.
Now this isn't the kind of thing that anyone can comprehensively write about, as there are way too many variables, which means it'll never show up in a training set for an AI.
The best that we can do is speak about it in
the abstract and talk about it by example.
So here's an example:
Ten years or so ago, I spent some time working in Amazon.com's Logistics division. Among other things, I wrote the storage layer, the network layer, the storage state machine, the integration test suite, and the core logic for the system that the delivery drivers use to drop packages off on your porch.
So early on in the project, I identified a potential problem.
We were required to exchange time information over the network using "Epoch Time," the number of seconds since some arbitrary date; in the Unix case, midnight UTC on January 1, 1970. This is the way most Unix systems handle time internally. But it's a really, really bad idea to use that on the network.
In fact, a group of smart people realized it was such a problem that, back in 1988, when I was graduating from high school, they created a standard way of handling dates over the network that avoids these issues. That standard is called ISO 8601.
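To make the difference concrete, here's a minimal Python sketch, purely illustrative and nothing like the actual Amazon code, of what each format carries over the wire:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# A moment as the user experiences it: 9:00 AM Eastern on March 15, 2025.
moment = datetime(2025, 3, 15, 9, 0, tzinfo=ZoneInfo("America/New_York"))

# Epoch time: a bare count of seconds since 1970-01-01 00:00 UTC.
# The wall-clock time and time zone the user cared about are gone.
print(int(moment.timestamp()))  # 1742043600

# ISO 8601: the wall-clock time and its UTC offset survive the trip.
print(moment.isoformat())       # 2025-03-15T09:00:00-04:00
```

The epoch number only tells you which instant something happens; the ISO 8601 string also preserves the local context, which is exactly what the scheduling bugs below needed.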
So I proposed that we use ISO 8601, and I was
told that we at Amazon had to stick with
the corporate standard.
My intuition, based on the 25-ish years of
experience I had at the time, told me that
we would have problems, although I couldn't
guess exactly what they would be.
So I put in some extra code and tests to handle
a couple of issues I could think of,
and then I just crossed my fingers, 'cause what else was I gonna do?
And sure enough, we had problems.
The worst ones were related to drivers who were
scheduling their shifts for future dates.
The bugs usually appeared when someone was
signing up for a shift while they were either
on vacation or right before a daylight savings
time change.
In the event that the time zone the driver was in when they scheduled their shift was different from the time zone they were in when they showed up for work, their schedule would be off.
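Here's a rough sketch of that failure mode in Python; the details are hypothetical, but the shape of the bug is the same: if you bake today's UTC offset into an epoch timestamp for a future shift, a daylight savings change in between throws the schedule off by an hour.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

NY = ZoneInfo("America/New_York")

# The driver wants a 9:00 AM local shift on March 15, 2025,
# which is after the March 9 DST change, so the offset is UTC-4.
intended = datetime(2025, 3, 15, 9, 0, tzinfo=NY)

# Buggy conversion: take the zone's offset *today* (March 1,
# still UTC-5) and bake it into the future shift's timestamp.
offset_today = datetime(2025, 3, 1, tzinfo=NY).utcoffset()
buggy = datetime(2025, 3, 15, 9, 0,
                 tzinfo=timezone(offset_today)).timestamp()

print((buggy - intended.timestamp()) / 3600)  # 1.0 -- off by an hour
```

A driver scheduling from a different time zone hits the same trap: the offset captured at scheduling time isn't the offset in force where and when the shift actually happens.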
And when I left the project, that was still a
problem.
Hopefully, they've managed to get it fixed by now.
My intuition at the time was good, but it wasn't
good enough.
If I had thought of the context of shift scheduling while in a different time zone, I could have either tried to compensate or used that as a further argument that we needed to use ISO 8601 dates.
Probably still would've lost that argument, but that's
another story.
But that's the kind of thing that AIs can't do.
They can't recognize when they need to change
their context, they can't imagine how user
situations might be different from what they
expect, and they can't recognize that a bug
now is the consequence of thought processes
involved in a decision that was made months
ago, and use that to improve themselves and their decision-making process.
In fact, even if the AIs get as good as real
humans, as unlikely as I think that is, the
AIs probably still won't be able to do that
because the vast majority of human programmers
I work with can't do it either, including the
folks at Amazon that refused to change
the corporate standard because they thought it
would be good enough.
I discussed this some in my "starting a side
project" video. You can go here for more
information,
but the thing that most programmers are missing
and the thing that you can't get in school,
is the experience of putting your code out in
the wild, seeing what goes wrong, understanding
what you could have done if you thought of it,
to avoid the problems and then applying
that new knowledge in the future.
Like I said in that video, most developers only
ever work as part of teams, and they
only care about the development side, not the
operations side.
They don't have to deal with the consequences
of their decisions, they don't have to wake
up at two o'clock in the morning when something
breaks, they don't care to learn.
A ton of managers are like that too, and most
big companies are structured in such a way
that they can get away with that.
This is the kind of programmer or co-worker
that you've seen that grumbles or complains
when a bug report comes in and whose preferred
solution is, "I can't reproduce that, so
it's not a real problem."
Now, they're bad programmers, and they should
feel bad, and if your natural reaction
to a bug report coming in is, "I don't want to
have to reproduce that," then you're a
bad programmer and you should feel bad, too.
Bug reports are an opportunity to learn what went wrong, to learn how you could have done better, and to improve, but most programmers are that kind of bad programmer.
Most programmers and managers operate on an
"I did my part. The rest of it is someone else's
problem" basis.
Now, this is exactly the same way that the AIs
operate.
When you ask an AI to write code, it spits something out, and that's the end of its involvement.
Once that conversation is over, it's done.
If somebody later brings the AI a bug in that code to look at, it's treated as an entirely new issue and all the context has been lost. It can reread the previous conversation, but that's not the same thing.
If you want to be better than AIs can be, you
have to avoid that mindset.
There's a blind spot here that most people,
even a lot of good developers, have about
our work.
Unlike the way that AIs generate code, the way that schools teach and test computer science students, and especially the way that programming interviews screen candidates, the true nature of programming work is this: the work does not just stop.
In fact, most programming work is not "Done" for
years if not decades after the first release.
Even after every member of the original team
has long left the project, the work of
maintaining
that code can continue, maybe for as long as a
human generation or more.
This is the misunderstanding I mentioned in the
introduction.
People, even a lot of developers, tend to talk
about whether or not an AI can replace
a programmer in terms of whether or not the AI
can perform some discrete tasks that the
programmer does on a daily basis. But that's
not the job, at least not if you're doing
it right.
A lot of developers, a lot of teams get this
really wrong though.
A lot of programming is done such that each
code change is treated independently of all
other code changes.
This is the way that an AI would do the work
and it creates a ton of bugs.
It's easy to spot if you look from the outside.
It's what causes a bug fix or a patch or a new
release to have new bugs in it that didn't
exist in previous releases.
Now I'm not talking about a new added feature
that contains bugs.
I'm talking about a thing that used to work,
but it doesn't anymore.
When done correctly, outside of new features, the number of new bugs in each release should go down dramatically, because what we can do, if we try, is understand that a bug we're working on now was caused by, or at least contributed to by, some trade-off we had in mind when we made a decision some months ago. Then, not only can we fix that bug, we can learn to recognize similar trade-offs when they come up again, so we can avoid similar problems occurring in the future.
We actively look for other places that mistake
was made and we clean them up before they
can start causing problems.
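In test form, that habit might look something like the following sketch; the names are made up for illustration, not from any real codebase. Fix the bug, then pin the fix so the same trade-off can't silently come back:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def shift_start_utc(local_wall_time: datetime) -> datetime:
    """Convert a zone-aware local shift time to UTC at the moment it's
    needed, instead of baking in whatever offset was in force earlier."""
    return local_wall_time.astimezone(ZoneInfo("UTC"))

def test_shift_across_dst_boundary_keeps_local_time():
    # Regression test for the hour-off scheduling bug: a 9:00 AM
    # Eastern shift after the March DST change must land at 13:00 UTC.
    ny = ZoneInfo("America/New_York")
    shift = datetime(2025, 3, 15, 9, 0, tzinfo=ny)
    assert shift_start_utc(shift).hour == 13
```

Once a test like that exists, hunting down the other places that convert future times the buggy way is the "clean them up before they cause problems" step.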
In addition, good experienced developers know
to consider the future maintenance effort
when considering any change.
Often, I've had to push back on changes that
seem simple to implement because I knew that
the cumulative scar tissue of adding many, many little one-off features, release after release, makes for a very fragile system.
A human can learn that, but it doesn't fit into a Stack Overflow answer or a Reddit post or a LeetCode riddle, so the AIs don't have a clue about it, and to the extent that it's discussed in blog posts or videos like this, it's too abstract for an AI to actually act upon.
There's also an additional complication that the AIs are particularly bad at dealing with, which is when the environment around our code changes outside our control and we have to deal with it.
It's very common these days for new security
issues to be found or for new bugs to be
introduced
into the operating systems or libraries or
browsers that our code runs on top of.
When the same code suddenly starts behaving differently than it did last time under the same set of circumstances, it must be because something else changed, but often we don't even know what.
For the last few years, the first step for most programmers when that happens has been to type the problem they're having into Google or Stack Overflow, and hey, I've done it too. But there are times when there just aren't any answers, or worse, there are answers, but they're all out of date and no longer relevant, and they just waste your time.
Most programmers these days know that sinking
feeling of realizing they have a new problem
that they can't copy-paste their way out of.
Those are the cases that AI can't help with
because the solutions don't yet exist so they
can't be found in the training data.
But I promise it is possible to learn and fix
those bugs.
At the risk of doing the "back in my day" thing,
I had been getting paid to program
for 10 years before there even was a Google.
We did it without Google or Stack Overflow or AI back then, and it's still possible, at least when it's needed.
And you learn to fix those bugs by putting in
the work of fixing them.
And then by looking back on how that bug came
about and how you might have prevented
it.
If you've got a job right now, you can start by
looking around at your job for those bugs
that don't have easy answers and volunteering
to work on them.
Get to know the QA and the operations folks at
work. Learn what they're seeing that most
of your team is ignoring. That might work
depending on where you are, although a lot
of corporate structures make that difficult.
In general, the smaller your company and the
smaller your team, the easier it will be for
you to be able to work on these kinds of
problems.
I've spent a lot of time at startups and they
have lots of opportunities to understand
the full product and work on that kind of thing.
People also tend to care more, because if the product fails, we're all out of a job. But startups have work-life balance and stress issues too, so it's not a panacea.
But it is possible to take on that
responsibility even in many larger
organizations if you're
willing to do the work and put up with crap from some people around you.
Another quick example:
This is a photo of a bunch of empty delivery
bags with barcodes on them from my time at
Amazon.
Each one of those represents an actual product
that I bought myself with my own money and
then acted as my own delivery person and
delivered to myself as the customer using the
system
my team was building at the time.
I wasn't required to do that.
In fact, several people, including my managers,
got upset at me about it.
Although since I was spending my own money, the only way they could have stopped me would have been firing me. I went outside my part of the organization and worked directly with the folks at the local distribution center to set up my accounts and my permissions to be able to do that, and I got to know a bunch of those folks in the process.
They were thrilled to have someone from
development that actually cared about their
perspective
for once and I learned a ton from working
directly with them that I couldn't have
discovered
otherwise.
It made me a much better programmer and it made
our system much more reliable.
I don't know of anyone else at Amazon in development who spent their own money to test a product that they were developing, but it was definitely worth it to me.
So you probably can find a way if you try hard
enough, but if you can't get access or
time or permission to work on those problems or
just don't want to wait, then your best
bet is to build your own project.
The easiest way to do that these days is to
build what's called a "Software as a Service"
or "SaaS" project.
The startup costs are tiny; it's mainly just your time and labor. The only potential risks are legal, and in my personal experience those have been quite small.
That might not be true for you; it depends too much on your local laws and your employment contract, if you've got one, and the only thing I can advise about all that kind of stuff is to talk to a lawyer and maybe an accountant too.
I know that stinks, but these days it might be
a good idea for everyone to know a lawyer.
Like I said in my "Starting a Side Project" video, running your own thing, as long as people are using it so you can learn from their bug reports and feedback, is the best way to learn to be a better programmer.
I encourage every single programmer to do that,
but it's even more important than that
these days if you want to be in this profession
long term.
Being a better programmer is useful and important, and I want all of you to get better, because the better you all get, the fewer bugs the rest of us will have to deal with. But these days, as the AIs get better and better, being a programmer who has experience specifically in the kinds of coding skills that the current type of large language model AIs aren't good at, and cannot be good at, is a far more valuable, if not irreplaceable, skill.
So I've got some more videos in the works about starting and building your own software as a service project, and now that I've wrapped up my involvement in a data warehouse consulting project that consumed most of my time at the end of last year, I'll be able to devote more time to this channel and post more regularly.
So subscribe if you're interested in seeing
more on that kind of stuff.
Until then, keep an eye out for opportunities to understand how the bugs you see today were caused by decisions you or someone else made in the past, and seek out the tasks that Google doesn't have good answers for. And remember: the Internet is full of bugs, and anyone who says different may well be a crappy programmer trying to find an excuse not to have to fix their crappy code.
Let's be careful out there.