Mitchell Hashimoto’s new way of writing code
What was your experience back then of
AWS? Your honest view.
>> AWS was really arrogant. Felt like they
were doing us a favor. Subtle vibe of we
will spin up a product and kill your
company.
>> Terraform just seemed to be everywhere. Why do you think that sudden popularity happened?
>> One of the things that frustrated me was
like, oh, they only won cuz they were
first to market. We were like seventh to
market.
>> It feels like most of open source will have to change because of AI. AI makes it trivial to create plausible-looking but incorrect and low-quality contributions. Open source has always been a system of trust. Now it's just default deny and you must earn trust.
>> Do you think Git will be around in a few
years?
>> What's interesting is this is the first
time in like 12 to 15 years that anyone
is even asking that question without
laughing.
If AI agents can write code, open pull
requests and ship features, do we even
need open source contributors anymore?
Mitchell Hashimoto, the co-founder of HashiCorp, has been thinking deeply
about this, the future of open source
and how to efficiently integrate AI into
his day-to-day workflow. Mitchell built the tools that power modern cloud infrastructure: Terraform and the HashiCorp stack. He also created a popular terminal, Ghostty, and I consider him to
be one of the most thoughtful voices in
the industry on how AI is changing the
craft of software engineering. In
today's episode, we cover the origin story of HashiCorp: a failed university
research project, a notebook of unsolved
problems, and an email from his future
co-founder that he answered in two
minutes. His honest, unfiltered take on working with AWS, Azure, and Google Cloud
as partners, both the arrogance and also
the brilliant engineers who never
thought about the business, how he's adapted to AI coding tools, why he
always keeps an agent running in the
background, and his practical advice for
engineers who have not yet warmed up to
AI agents, and much more. If you're
interested to hear from one of the most
hands-on builders in the industry and
want to know where AI tools are useful
versus not, then this episode is for
you. This episode is presented by
Statsig, the unified platform for flags,
analytics, experiments, and more. Check
out the show notes to learn more about
them and our other season sponsors,
Sonar and WorkOS. Mitchell, welcome to
the podcast. It's awesome to be here in
person.
>> Yeah, it's it's cool to meet you in
person after so many years of following
you.
You've had such a massive impact on the tech industry, on software engineers,
but how did it start?
>> I think the high level is the same story
as a lot of people. I self-taught uh
around 12, 13, early teens, motivated by
video games. Same like same as a lot of
people. Um although I really quickly
realized that I liked web, you know, web
was new. Google wasn't out yet. I think
web was new. And so, really quickly, I never became a video game programmer. I really quickly just became a web programmer: PHP, um, Perl,
that sort of stuff. And uh because I was
so young, the only way I could learn was
through whatever code was published
online. And so that's how I got
acquainted with open source. I didn't
know that's what it was called then, but
a kid with no job, no money. Um parents
didn't want to buy, you know, uh
professional books were like I don't
know what they are now, but they were
like 50 bucks then, right? And and so
they were like, "No way, right? This and
also they didn't believe I was going to
read it." And so there was no way
they're gonna buy that. So, um, yeah, just anything I found online was my way into coding. I'd walk to school
every day with a group of friends.
There's a period of time where I printed
out the first or second chapter of the
PHP manual. I remember it was about 30
to 40 pages of paper, and I had never programmed. So, all this stuff, and I'm 12, it's very confusing. So, I read the whole 40 pages every walk to school. And
I don't remember how long it took me,
but I did that a long time before, you
know, I remember this one moment where I
was walking to school where suddenly I
understood
what these dollar-sign things were. For whatever reason, it just clicked.
>> Those are variables, right?
>> Variables. Yeah. And I had never heard that word before. Like, you don't hear the word variable as a 12-year-old in any context. And finally at one point it
hit me that they store things and things could change. And I remember, after weeks of reading this thing and not understanding it, getting to school so excited, being like, it triggered. And then after that, I remember stuff happened really quickly.
>> What kind of stuff did you build?
Websites.
>> Yeah, websites. It was gaming related
websites. It was like a lot of game stuff, forum software. Yeah, I mean, I had a lot of fun cloning websites, you know, poorly, but, like, PayPal was out, and I
really wondered like how does money get
transferred over the internet? How does
that work? So I tried to build, like, clones of websites. I did, like,
masquerade as an 18-year-old on, um,
like freelance websites. And so I got,
you know, 100 bucks here, 50 bucks here
to do like image like upload stuff. I
decided to study computer science in
college. Um went to University of
Washington. I mean, I guess that's when you would call it serious, but really, I was coding every day as much as I could through
high school.
>> Oh, okay.
>> Yeah, that's impressive. Were you alone with this within your friend group? Were there other people doing it, or was it kind of lonely?
>> It was lonely. It was very lonely in the real world, and then I quickly found online friends
through like MSN Messenger and AOL Messenger and forums. I found online
friends, many of whom I have now met and still keep in touch with, which was cool. But
no, I mean, back then, being a programmer, when no one knew that word, but being into computers, was like a social kiss of death. And so even my closest friends didn't know. My best friends and stuff, I hid it from all of them, and I didn't talk about it at school and stuff like that. So it was just a secret until I went to college,
and college is when I decided to like
let it all out. The big like break that
I got was I blogged, and, uh, late freshman year of college, heading into summer, someone
just emailed me out of the blue and I
kind of thought it was a scam. It was
just like do you want to, you know, it
was do you want to be a Ruby on Rails
programmer? And I didn't know Ruby. I was a PHP programmer. Um, I had never done Ruby. I'd never done Rails. But I
got this email and I'd never been like
head-hunted before. Like I didn't know
what this was. I was also 18. So I
didn't really know what to think about
it. I probably would have not responded
except that the person contacted me was
in LA and so I did respond and we set up
a meeting like a real physical meeting
and I met him and met the company and
realized this is real and they're
serious and genuine and I took that job
and uh yeah I mean that was I learned a
lot on the job there. So that was a huge
change. Um
>> was it a startup or small company
something like that?
>> No, it was a consultancy. So, it's kind of like one of those standard ones. This was like 2007; Ruby on Rails had blown up. It was already very popular, and there were all these consultancies that appeared out of nowhere that were basically like, we'll build your minimum viable product. And yeah, we were one of
those shops. So great job for a college
student cuz we'd see a client for like 2
months and I would build a YouTube style
website and then I would build like a
philanthropy website and then I'd build
an e-commerce website and like it was
just like I got to learn all these
different technologies and different scale challenges; there wasn't a lot of scale because we were building MVPs, but different
thinking of scale problems. Um, yeah, it was great.
>> How did HashiCorp eventually start, or what happened between getting this Ruby job and a few years later?
>> It kind of starts with this Ruby job. Um
there was one guy that worked at the company, and he's pretty into his privacy, so I won't share his name,
but he was my boss, and there was no Heroku, there was no Engine Yard, so you had to, like, self-host, and Ruby on Rails hosting then was kind of difficult.
So he was the guy who got all these
projects hosted on on dedicated servers
and I didn't know anything about that
and I and he ran Linux and he had long
black hair and he like didn't use a
mouse and all these things that were so
weird to me and I was just intrigued. I
just he sat in the corner. He didn't
want to talk to anybody. Um, and I just
wanted to know more about what that
world was. And luckily, despite
appearances, he's very nice. And so, um,
yeah, I I think as soon as I showed a
genuine interest, started asking a lot
of questions, he started just giving me
challenges like, well, the first
challenge I remember he did is he
unplugged my mouse. And it's funny, cuz I don't think in this era, if you did that, it probably would have been some kind of harassment or something. But he literally unplugged my mouse and said, "You're never going to work with a mouse again. So figure it out. I'm not going to tell you how." He just unplugged my mouse, restarted the computer, your problem now. And took the mouse away.
>> Mhm. Um took me about a week and I got
really good with the keyboard.
>> Harsh lesson.
>> Harsh lesson. And once I got good with the keyboard, um, he installed screen in my terminal and said, "Figure this out. You're going to use this now." You know, there's no questions, like, you will use this. And he just slowly instilled it in me, and as we got there, then it
became, you know, here's SSH, here's a package manager. He slowly taught me more and more, and I loved it, like, immediately. It was like, this is super cool, super fun. So that long-winded process got me into infrastructure. And then simultaneously
or very shortly afterwards I joined a
research project at the University of
Washington called the Seattle project
which is a terrible name cuz you can't
Google it, but it's called the Seattle project. I'm sure it doesn't exist anymore. And it was, again, another
popular thing during this time was kind of like Folding@home. It was this idealized Folding@home, which is: can a bunch of people donate compute? You know, it could be your home machine, it could be an unused rack, it could be in your basement, it could be around the world, but can you donate all this heterogeneous hardware and then generalize a scheduler on top of it, so that academic institutions across the world could just run workloads? The job I got was, very vaguely, to create not the scheduler component but the ability to spin up all these nodes,
um, and a bunch of other stuff. It's very vague, but it was this infrastructure-y problem, and I completely
failed at it. Like, I tried for a quarter, but from a technical side I just failed. And I wrote down in this notebook what I thought the missing pieces were, why I couldn't solve this problem in a quarter, in a 10-week period: well, we need this, we need this,
we need this. It's interesting to see how structured Mitchell was in his approach in defining components that would later become parts of the HashiCorp stack, and this leads us nicely to our season sponsor, WorkOS. One thing I've learned from studying great engineers, Mitchell included, is that they're very deliberate about what they choose to build. Great engineers don't just ship
fast. They think in systems. They
understand leverage and they're careful
about what becomes part of their long-term surface area. If you're
building SAS, especially an AI product,
authentication, and enterprise identity
can quietly turn into a long-term
investment. SAML edge cases, directory
sync, audit logs, and all the things
enterprise customers expect. WorkOS provides these building blocks as
infrastructure so your team can stay
focused on what actually differentiates
your product. Great engineers know what
not to build. If identity is one of
those things for you, visit workos.com. And with this, let's get back to Mitchell's notebook with all the
components he would end up building at
HashiCorp. And I still have this
notebook um at my house here, but the
problems are really like, you know, I
have no way to declaratively manage the
different resources that are out there.
I have no way to network these together
in a private network. Um, you know, I
wrote these things down and there was a
lot of stuff there that I never ended up
building, but a subset of that was ultimately what HashiCorp would end up building. And I shared this with my undergraduate boss, who was Armon, who was my co-founder.
>> Who later became your co-founder.
>> Yes. He was my boss on the undergrad side. And I shared it with him
as kind of an exit interview like this
is what it is. And then some period of
time passed, not much, weeks passed and
he emailed me out of the blue and was
like, "Do you want to do a startup
together?" That, you know, you're a
teenager and you have no idea what this
commitment is.
>> Well, you're like 21 or something at
this point.
>> Uh, probably not even. Probably 19 or 20. Yeah. And he emailed me out of
the blue like, "Do you want to do a
startup?" Like person you never met or
you barely met, never met personally,
like all this stuff. It's so funny. And he emailed me that at, like, 11:30. You're in college. I emailed him back in 2 minutes and said, "Sure." And he remembers thinking, "Wow, he responded so fast that he's just in. He's ready to go." That was sort of the
start of our friendship. And then, uh, again, there's overlapping pieces here, but I was also at the time working
on something called Vagrant. And Vagrant, you know, came out of the consultancy, less so the research project. It was solving the problem in this consultancy where we had new clients every two months and we had different teams. How do we create reproducible dev environments so I could go help somebody without losing a lot of billable hours? So, so this is a
development environment that you could
spin up quickly, right?
>> Yeah. Yeah. The metaphor I always had, I didn't use Windows then, but the metaphor I always used was: how could I double-click and open a dev environment?
>> Yep.
>> That was a metaphor I used because
>> it's a good one.
>> Yeah. What the problem we were having was, any hour in a consultancy that you can't bill is just a waste. And so it
was basically like if somebody else is
behind schedule, how can I jump in, help
implement a feature, and jump out? And
in that era, just setting up the dev environment for a project might take you half a day.
>> And you couldn't bill that to the client, right? The client would only pay for the work.
>> Yeah. You couldn't bill that to the client. So it'd be like 4 hours of work wasted, and it would probably mess up
your dev environment for your actual
client because you would be a different
Ruby version, a different Rails version
and so you would kind of destroy both
ends. And so Vagrant came out of that, which was, I just need to go over there, and what ended up becoming `vagrant up`: sweet, you know, a few minutes, let me help you for the next two hours, and then...
>> And how did you build it back then? Was it some kind of virtual machine, or...
>> Yeah, it was with VirtualBox. Oracle, well, it wasn't, it was Sun then, but, um, VirtualBox. And that's another cool constraint, which is that I was a college student, so I had no money, so
>> This was expensive back then, right?
>> Uh, virtualization was expensive. VirtualBox was free and open source. I didn't care about the open source side for that; I was never going to read it. But yeah, it was free. That was why I did it, and that's why I did that and not, like, EC2, which had come out by then. But I didn't do EC2 cuz I didn't have money to pay for these instances. So, um, yeah, those were the constraints. And I like bringing
that up because I think so much of
software engineering is understanding
constraints and working with these
constraints. And on your prior podcast there were, you know, what you called the forces, like static and dynamic forces. It's
that and and I think that helps create
better software um when you have
constraints and that was my constraint.
So yeah, so that was: we have Vagrant, we have this failed infrastructure project. Um, we have sort of my boss at the consultancy getting me into infrastructure and all of that. And then, I mean, externally we had the cloud being introduced, AWS. I went to school at the University of Washington, so
>> oh
>> I was right there
>> right in the epicenter of it Amazon was
next door right
>> Amazon was right next door. They donated a bunch of credits. Right away I knew about the launch. Um, most of the CS students at U-Dub interned at Amazon, not necessarily AWS, but also including AWS. Armon interned at AWS. And so, like, I was in the bubble of cloud, cloud, cloud, AWS, when people were pronouncing S3 like "S cubed." Like, people didn't know how to pronounce it, right? That's how new it was. And so, yeah, all this stuff kind of came together and kind of led me on the path to build tooling to better manage it.
>> At that moment in time, when you saw cloud, did you know or have a conviction that it would be big, or as big as cloud has become? Cuz I'm just trying to put myself back: this was very, very new back then, right?
>> Totally.
>> And I think, you know, I imagine, I assume, more people would have been skeptics or thought that it was just a fad or whatever. What was it like? Can you bring us back a little bit there, compared to today?
>> It was very unpolished, I guess, is how I'd describe it. You know, like, EC2 was... I mean, AWS in general was very unreliable. Um, S3 was the only ever-reliable piece; everything else was totally unreliable. Um, and
there was only a few services like EBS
didn't even exist when we started. So,
there was no durable storage besides S3
when when I first started with it. It
just felt very raw. Um, and I never really viewed it as, this is going to be big. I mean, eventually I
thought it was going to be big. What I
viewed it as is this is the better way
to do it. This feels like the better way
to do it. Just, yeah, at a base level, like, whether this wins or loses in the realm of markets and social popularity, I don't know, but this felt good. And so that's what kind of pushed me towards it. And I say this
over and over I'm I'm really motivated
by like what's the most fun and what
like feels right and that it just felt
right to me. Um, I think where I started making the bet, me and Armon both started making some kind of bet, was not just when we started HashiCorp, but we started HashiCorp on the basis of, like, multicloud. And
I really like to like contextualize that
at the time we were starting this which
was like 2011, 2012, which is that AWS was huge, Azure didn't really exist, and Google Cloud didn't really exist. There
was Google App Engine, right? It wasn't
even cloud.
>> Correct. Correct.
>> I I used to use that when it was App
Engine. Yeah.
>> Yeah. Yeah. And so in that context, as
we were pitching these cloud agnostic
tools, I mean, we got a lot of raised
eyebrows being like, "This is a waste of
time because AWS is the only player in
town." And our conviction was at that
point cloud is going to be huge and
anything that's economically huge, other people want a piece of that pie. And so you're not going to just have AWS. It'll
be huge, but you're going to have these
others pop up and Microsoft is not going
to sleep on it and Google's not going to
sleep on it and who knows who else and
who knows and that was our conviction.
That was our bet and uh it mostly played
out that way.
>> So when you decided to start HashiCorp, you had Vagrant. Was the idea to, you know, invest in and commercialize Vagrant? And did you go out to raise money, or did you start by bootstrapping? How did that go?
>> It wasn't to commercialize Vagrant. But what we had done is, Armon and I both worked at this mobile ads startup. Um, there were like less than 30 people, and we had built, with Python and C, these really rough prototypes of the ideas that I had in this notebook, like service discovery and, um, an early version of Terraform we called Launchy. So we did DNS-based service discovery by connecting an off-the-shelf DNS server with Postgres, and we did all these hacky things, but they felt good. And again, we get back to this, like, how things feel to me motivates me. It felt right,
directionally right. I graduated. The environment in Seattle was not very startup-heavy at the time. It was
basically everyone was like are you
going to work for Amazon or are you
going to work for Microsoft? Yeah, that
was like kind of it and and like to a
certain extent Facebook was starting to show up there, but that was it. I
knew I wanted to work for startups. So,
I moved to San Francisco, found a
startup that would hire me, which was a
mobile ads thing. Um and uh just wanted
to learn. So, that that's the short step
there. So, I ended up in San Francisco.
Um, and Armon was actually going to do a PhD at Berkeley, and he was accepted and in, and this was a huge deal, huge deal. I mean, incredible program. Um, and so he was going to go there and he would have done amazing things there. But I convinced him to join this mobile ads startup. He actually took a year to defer on the PhD. He's like, I'll give it a year.
>> Yeah.
>> I'll join this mobile ads
>> and I'll go back for sure.
>> If it doesn't work, I'm going to go
back. And what ended up happening in
that year is now where we get to. Um, which is that we had this hodgepodge of prototype tools
that felt right. and we were going to
all these little startup mingling
parties, you know, it's like things like
GitHub drinkups, but also, this is such a San Francisco thing, and I think that's why, even though I
don't want to live there again, it was
so magical at the time. Um, like, across the street was this company that was called Zimride at the time, which ultimately became Lyft, and they invited us over to get drinks and have pizza to demo this new app with a mustache that, like, didn't have a name, and
>> Wow. Yeah. So, like stuff like that.
>> You were there when Lyft was born.
>> Yeah. Yeah. Yeah. And like that happens
all the time. Like all the time in San
Francisco and and it's not unique to me
at all. Like Yeah. There there's a bunch
of stories there that I think aren't
worth getting into. It's just like it's
fun. But I went to all these things, and people would just talk. They're all a bunch of tech guys, right? And you'd be like, "What are you working on?" And there's two things I realized.
One is all these companies are cloud-first. They're all just adopting AWS first. There was no dedicated...
>> This was like in 2011, 2012 or so. They just went and paid for cloud, which was brand new, right? The previous generation just had on-prem, right? Server rooms and server admins, they had roles for those, all that jazz.
>> That was just gone, like,
>> that must have been a massive shift
>> I I literally can't think of one social
event I went to where there was somebody
that had dedicated servers the only one
is
>> Twitter. Yeah, but I think we probably have to emphasize that this is a massive shift in the industry, right? And it probably was only happening in Silicon Valley, or, like,
>> probably
>> yeah, well ahead of everyone else
>> at a scale that was larger than anywhere
else. It was probably in Silicon Valley.
The joke used to be, cuz AWS was so unreliable, that when AWS went down, all these startups finally became more cash-flow neutral and they would lose less money. Um, so there would be, like, a huge, you know, us-east outage, and everyone would be like, are you going to migrate regions? Like, no, we're saving money right now. But yeah, getting back to it, uh, everyone was cloud-first, cloud-born, cloud-native, whatever you want to call it. And, uh, the other thing was they
were hitting all the same challenges
that we were hitting and they didn't use
our tools cuz they were just like
internal prototype tools. But I knew that our tools felt good. So,
I had these two things come together
where I had some ego, some hubris where
I'm like, I'm pretty sure we're building
the right thing along with I think the
industry is moving in that direction and
like we could we could come together.
Um, and so that led to, let's start a company based around that. The fact that I had Vagrant was more of, like, industry respect. I mean, Vagrant wasn't that big then, so that's not saying much. Um, but I just had some public foundation to give some credibility to head in this direction. Um, that was about it, and we started HashiCorp.
>> And then when you decided, you incorporated, you know, got things going, did you decide to raise money? Cuz again, back then, I guess it wasn't as common wisdom; you know, Y Combinator was probably starting around that time. So, like, were startups a big thing, or was it a given that, okay, if you start a startup, you're going to raise money?
>> In my social bubble it was pretty much a given. Um, and not just that. So, we incorporated, um, I self-funded. I transferred $20,000 from my savings account into this corporate account, initial funding. Um, and I worked off of that. I paid myself $0 for the first 6 months. So,
the 20,000 was purely towards whatever
things the company needed. That was the
first 6 months. And then Armon joined after 6 months. Um, and we decided to
raise uh and the motivation there really
is there weren't many other options. There were basically three options as I saw it then. Bootstrapping, right, just, like, build something, make money, and as it becomes affordable, continue to grow, reinvest, and grow: bootstrapping. VC on the other side. And then in the middle was what I called patronage, which was not, like, Patreon-style stuff today; that infrastructure didn't exist, there was no subscriber-donate type infrastructure then. Um, patronage was more like, you might be able to convince a company like VMware to pay your salary for you to work on some idea, and the best example is Redis at VMware. And yeah, we kind of laid out this plan that we wanted to do, um, which at the inception of the company included Terraform, Consul... no, it included everything but Vault. Uh, Vault came a little bit later. And we looked at that and said, if we bootstrap this, even if we hit it out of the park, this is going to take us, like, a decade just to build the software, and that's the best-case scenario. This is just going to be slow.
And the problem with slow is that things have a window, and cloud was growing so fast that if we were that slow, someone else was going to do it their own way. I mean, I guess the primary issue is we really just wanted to go fast.
>> You knew you needed to.
>> Yeah. We needed to hire many engineers right away and start building
right away. And so VC was the route we
chose.
>> Can you talk us through the first several products and what they do? You know, we know Vagrant, but just for those who are less aware of what became the HashiCorp stack later, right?
>> Yeah. Let me see if I can still get these in order. I'm pretty sure I can. So, Vagrant predated it. The first product that came out of HashiCorp itself was a product called Packer. Um, kind of understated publicly, but it kind of underpins a lot of things in the industry to this day.
That's an image building uh tool. So,
building Amazon images, VMware images,
etc. Um, I'm not even sure how much, like, publicly came out, but there are whole multi-billion dollar cloud platforms where all of their official server images are built with Packer. Everyone was trying to utilize this horizontal-scaling, autoscaling nature of AWS. That was the dream. And it's kind of like the cold start problem with serverless today: if you were waiting tens of minutes for your server to be ready, you couldn't react. Um, and so my idea was, do that once, snapshot the image, and then next time just spin up that image. Um, and so that was Packer.
>> That was Packer.
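The golden-image idea behind Packer can be sketched in a few lines. This is a hypothetical toy illustration, not Packer's actual code: run the slow configuration once, snapshot the result as an image, then launch servers as instant copies of that image.

```python
# Toy sketch of the golden-image idea (hypothetical, not Packer's code).
import copy
import time


def configure(server: dict) -> dict:
    """The slow per-boot setup we only want to run once (simulated)."""
    time.sleep(0.01)  # stands in for minutes of package installs
    server["packages"] = ["nginx", "app"]
    return server


def bake_image(base: dict) -> dict:
    """Run configuration once and snapshot the result as an image."""
    return configure(copy.deepcopy(base))


def launch(image: dict) -> dict:
    """Booting from a pre-baked image skips the slow configure step."""
    return copy.deepcopy(image)


image = bake_image({"os": "linux"})        # slow, happens exactly once
fleet = [launch(image) for _ in range(3)]  # autoscaling: instant copies
assert all(s["packages"] == ["nginx", "app"] for s in fleet)
```

The point is the asymmetry: `configure` runs once at bake time, so autoscaling can react in seconds rather than waiting tens of minutes per server.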
>> So, Vagrant, Packer. The next one that came out was Consul. Um, Consul was solving the networking problem; not networking, it was more solving the
service discovery problem which was you
have all these machines coming and
going. Before, again, like to
conceptualize this, before you would
have a static set of machines that had
IPs, and you would probably use DNS or
something, but the IPs didn't change
that much. So, you could be like, "Oh,
my database is here and it's not
moving." But if you're in this world where web servers and load balancers and databases are just breathing, that's how I always describe it, breathing, they're in creation, destruction, creation, destruction, like, constantly. Then things are happening at a scale where the service discovery needs to be much faster. Um, and not
just faster, but you want to have better guarantees that when you get a response that, oh, it's at this IP address, that IP address is, like, ready. Yeah, I think
this is also kind of more mainstream
with like Kubernetes readiness checks
and health checks and things like that.
It was bringing that to more like physical servers or cloud servers,
virtual machines and things like that.
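The service discovery problem described here can be sketched as a toy registry. This is hypothetical code, not Consul's actual API: instances register and deregister constantly, and a lookup only returns addresses whose readiness check currently passes.

```python
# Toy service registry sketch (hypothetical, not Consul's actual API).
class Registry:
    def __init__(self):
        self.services = {}  # service name -> {address: is_ready}

    def register(self, name, address, ready=False):
        self.services.setdefault(name, {})[address] = ready

    def mark_ready(self, name, address):
        self.services[name][address] = True

    def deregister(self, name, address):
        self.services.get(name, {}).pop(address, None)

    def lookup(self, name):
        # Only return addresses that currently pass their readiness check.
        return [a for a, ok in self.services.get(name, {}).items() if ok]


reg = Registry()
reg.register("db", "10.0.0.5:5432")               # still booting, not ready
reg.register("db", "10.0.0.6:5432", ready=True)
assert reg.lookup("db") == ["10.0.0.6:5432"]      # only the ready instance
reg.deregister("db", "10.0.0.6:5432")             # instance destroyed
reg.mark_ready("db", "10.0.0.5:5432")
assert reg.lookup("db") == ["10.0.0.5:5432"]
```

The design choice this illustrates: lookups answer "where is a *ready* instance right now," instead of pointing at a static IP that may be mid-boot or already destroyed.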
And so that was Consul. Then after that, I think we did Terraform. Um, Terraform spins up infrastructure as code: describe your infrastructure. In AWS parlance, it was things like all
the attachments to your EBS volumes,
gateways, VPCs, subnets and like
connecting them all together. Like the
idea was I wanted to have an empty AWS
account or any cloud account and I
wanted to have this text and I wanted to
say, make this text reality. And that's what Terraform is. And you would wait whatever amount of time it took AWS, and you would blink, and you would have thousands of resources. And then, with one command, you could just tear it down to zero. That was Terraform. So that came out like 2014. Um, so that was the next thing. Uh, and then was Vault.
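The declarative model described above for Terraform, write the desired infrastructure as text, make the text reality, then tear it all down with one command, can be sketched as a toy reconciler. This is hypothetical code, not Terraform's real engine:

```python
# Toy declarative reconciler (hypothetical sketch, not Terraform's engine).
def plan(desired: set, actual: set):
    """Diff desired state (the 'text') against actual state."""
    return desired - actual, actual - desired  # (to_create, to_destroy)


def apply(desired: set, actual: set) -> set:
    """Create what's missing, destroy what's no longer declared."""
    to_create, to_destroy = plan(desired, actual)
    return (actual | to_create) - to_destroy


desired = {"vpc", "subnet", "gateway", "ebs-volume"}
state = apply(desired, set())  # empty account -> full stack
assert state == desired
state = apply(set(), state)    # one command tears it back down
assert state == set()
```

The same `apply` handles creation, teardown, and anything in between, because the user only ever edits the desired state, never the sequence of operations.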
>> Yep. Um, Vault is easiest to describe as secrets management at its core. Secrets management, encryption; it grew to do a lot more things than that.
>> So it's like, well, on your local developer machine you have, like, your environment variables, and doing that at scale, at a team level, at a company level: services need to access all this stuff securely.
>> Yeah, it was much more focused on, like, the production environment secrets. Um, I had dreams and visions of
really solving the developer secret
problem, but Vault really never never
did that well.
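The gap between "env vars on a laptop" and "secrets at company scale" is mostly access control. A toy sketch of the shape of that problem, not Vault's actual design: a central store where each service's token is scoped by policy to the paths it may read.

```python
class SecretsStore:
    """Toy central secrets store with token-scoped path policies."""

    def __init__(self) -> None:
        self._secrets: dict[str, str] = {}
        self._policies: dict[str, set[str]] = {}  # token -> readable path prefixes

    def write(self, path: str, value: str) -> None:
        self._secrets[path] = value

    def grant(self, token: str, prefix: str) -> None:
        self._policies.setdefault(token, set()).add(prefix)

    def read(self, token: str, path: str) -> str:
        # A service may only read paths its policy covers.
        if not any(path.startswith(p) for p in self._policies.get(token, ())):
            raise PermissionError(f"token not allowed to read {path}")
        return self._secrets[path]

store = SecretsStore()
store.write("db/prod/password", "hunter2")
store.grant("billing-service", "db/prod/")
password = store.read("billing-service", "db/prod/password")
```

The path and token names here are hypothetical. A real system would add encryption at rest, audit logging, and short-lived credentials on top of this basic policy check.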
>> Mitchell just talked about secrets
management, which turned out to be a
pretty important focus area for him. In
general, security is both very valuable,
but also pretty hard to do well. This
leads us nicely to our season sponsor,
Sonar. Looking at where we are today,
we've now moved past tab completion into
the era of agentic AI. Autonomous agents
are opening pull requests. One big
question: how do we get the speed of AI
without inheriting a mountain of risk?
Sonar, the makers of SonarQube, has a
really clear way of framing this: vibe,
then verify. The vibe part is about
innovation, giving your teams and your
AI agents the freedom to build and
iterate at high velocity. The verify
part is the essential automated
guardrail. As agents start contributing
more of our codebase, independent
verification that checks every line,
human or machine generated, against your
quality and security standards, is more
critical than ever before. Helping
developers and organizational leaders
get the most out of AI while ensuring
quality, security, and maintainability
is one of the main themes of the
upcoming Sonar Summit. This isn't just a
user conference. It's where devs,
platform engineers, and engineering
leaders are coming together to share
practical strategies for this new era.
I'm excited to share that I'll be
speaking there as well. If you're trying
to figure out how to adopt AI without
sacrificing code quality, join us at the
Sonar Summit. To see the agenda and
register for the free virtual event on
March the 3rd, head to
sonarsource.com/pragmatic/Sonarsummit.
And with this, let's get back to
HashiCorp and why the company decided to
raise 6 months after founding.
>> But yeah, it's just basically, where
do you store your secrets? And the
secrets were not just, I forget the word
I used to describe this, but secrets
were not just passwords; it was also
PII. So how do you protect emails and
addresses and stuff for your customers,
>> or credit card numbers?
>> Credit card numbers. So Vault was core
to all of that and continues to be.
>> That's a daunting thing to build,
something like that.
>> Yeah, we were really scared when we
built that, actually, because we kind of
hid the fact. We never lied about it,
but nobody on the team that built Vault
had more than one quarter of
undergraduate security coursework. There
were no professional security engineers
from industry, no professional security
academics, and, yeah, we built it. We
got a lot of audits because of that,
because we were scared. For us as a
startup it was very expensive: we paid a
couple of firms tens of thousands of
dollars to audit Vault 0.1. We paid two.
And we shared the early beta privately
with a lot of security experts for
review, not publicly. We got a lot of
good feedback. But yeah, we didn't want
that exposed, in a sense.
>> I understand, but it kind of validates
that you can build good stuff with
people who might not have the
experience. And people were learning,
right?
>> Yeah, the security stuff ended up
fine. We really quickly hired
professionals who helped with the
product, and the security side was
always pretty solid. But I think what it
really showed was that what the security
industry needed was a shift in user
experience more than a shift in what the
products did, because what we were doing
was not fundamentally different from
existing multi-hundred-million and
billion-dollar companies. But the
experience, the way you interfaced with
it, was dramatically different, and I
think Vault was a good example of that.
Yeah.
>> And after Vault came
>> Nomad.
>> Nomad. Yeah. Nomad, which was our
scheduler, which was a couple of years
late to the market. Yeah.
>> Why do you say scheduler? Was it not
an orchestrator?
>> I always described it as scheduling.
>> What did it do?
>> Simple thing. You have a pool of
compute. It finally solved that problem
I had in undergrad: you have a pool of
compute, you have an app that has a
certain set of requirements, and it
needs to find a place to run.
>> Yeah. Yeah. The undergrad problem we
talked about. And as you're building
these out, you said some of these took
years. How did HashiCorp as a business
work? Did you start to generate some
revenue?
>> There was, so...
>> All right, tell me about this one.
>> Yeah, I think we waited too long to
develop a business. But for four years
there was actually revenue from a couple
of random sources; there was just no
real, reproducible, growing
business.
>> So you were just building out this
vision, the founders' vision of, all
right, we need all these things that
would have taken a decade bootstrapped,
let's build them
>> in 5 years and figure it out.
>> That was literally it.
>> Yeah, that was literally it. And it
was all open source, and I always had
this mentality: if the company fails, it
doesn't matter, because if these are
good ideas, the open source community
will just continue them. I don't think I
would ever have told that to my
investors at the time, but I had this
idea that the technology was the most
important thing to get out into the
world. The business, I really hoped we
could figure out, but it wasn't the most
important thing.
>> And for those engineers who are
thinking of becoming founders, how did
this work with your investors? When they
put in money, did they get board seats?
Did you have to manage expectations?
Because, putting a bit of my business
hat on: for 4 years you were building
these cool things, but you didn't
exactly have a business plan. How did
that work? Did they just believe that
eventually you'd figure it out, or did
they see some kind of traction in the
open source?
>> It's
traction, and I don't think what we did
was atypical for Silicon Valley. The
really broad, handwavy way I like to
describe it: your seed round is about
building the product. You don't even
know if there's product-market fit;
you're making educated guesses, but
you're building something. Getting the
A, you've proven hints of product-market
fit, but you definitely don't have it
yet. When you get the B, you've proven
product-market fit, but you haven't
really proven repeatable revenue; you
now have hints of revenue. You know the
product is useful, you know people like
it and want to use it and maybe want to
pay for it, but you don't know exactly
how to get everybody to pay for it. And
then the C, D, and so on are just
continuing to build the repeatable
revenue machine. And so with that
framework in mind, we were on the right
track. The seed was basically to build
the product. We had clear product-market
fit by the A, in terms of the open
source: we had millions of downloads, a
lot of stars on GitHub, all sorts of
signals that showed this was resonating.
We had zero revenue. And so it was,
raise money and slowly get closer and
closer to solving the business problem.
I think we were just a year or two later
than the average startup, but the
general milestones were the same, just
on this slightly wrong timeline, I
guess.
>> And then when you decided to build a
business, you already had the Hashi
stack, and you built a managed offering,
I remember.
>> Yeah. Our first foray into
commercialization was a total failure.
>> Oh, really?
>> Yeah. We had this product that, well,
you would have to have been a diehard
HashiCorp product fan to know this, but
our first product was called Atlas. The
idea was commercially shipping the
vision of running all the products
together. And there were a couple of
death knells there. One was that you had
to run all the products, so if you were
just a Vault user, you had a really
impossible time buying into our
commercial product. The second was that
it was a huge problem to attach to,
regardless of the adoption it required:
you're trying to solve a problem that
multiple different buying organizations
in a company were fighting over. So even
at companies that had adopted all our
tools, we ran into the problem of who
pays for it.
>> It wasn't as simple as engineers paying
for it.
>> Correct. And I think one of the
lessons I'd share with engineers who
become founders without a business
background, one of the tough lessons I
had to learn, is that companies want to
pay for software, but they will fight
over whose budget owns it.
>> Budgets are important, right?
>> Yes. So, the budget has to exist, and
if it looks like a networking problem,
they're going to say, "Oh, networking
should pay for that," so I keep more
budget to buy the other toys that I want
>> or I can hire more people.
>> Yeah, and it can get broken down into
a vendor budget, so it could already be
earmarked for external purchases. Yeah.
So, we had this product where it was
like, does security pay for it? Does
networking pay for it? Does
infrastructure pay for it? Does dev
tooling pay for it? Where does this go?
It's that Spider-Man meme where
everyone's pointing at each other.
Ultimately, you don't sell anything. And
so, that was a failure for that reason.
I don't
remember the total time we chased this
down, but we had a board meeting, for
sure on a Friday; board meetings were
usually on Fridays. We're based in San
Francisco, and board meetings were an
hour south, in real Silicon Valley. And
it didn't go well. There was no yelling,
there was nobody saying, you guys are
messing up, nothing like that. The way I
describe it is, it's when your parents
aren't happy with you, but they don't
have to say that they're not happy with
you.
>> You know,
>> but you know they're not happy with
you.
We had this board meeting. We drove
home, Armon and I, and the complete
drive home was silent. And it's Friday,
Friday night, so usually what we'd do,
Armon lived in the city and I lived in
LA already, but we'd go straight back to
Armon's place and just have a glass of
wine, debrief, talk through things. We
didn't talk on this car ride home. Armon
drove straight to the office. I didn't
question that. We went into the office,
sat at a table not much larger than this
one; the only difference was there was a
whiteboard.
I think one of us at that point said,
"Well, that didn't go well." We both
knew it. We didn't feel good. And the
sequence of events here is now very
fuzzy, but at a certain point we
decided, let's play this experiment: if
there were no sunk costs, if we were
starting from scratch, what would we do
differently today? We whiteboarded all
this stuff out. What we whiteboarded was
per-product enterprise products, doing
Vault first, all this stuff. We wrote it
out and spent some amount of time there.
It's still Friday; it might technically
be Saturday by the clock, but it's still
Friday. I think it was Armon who looked
at the board and goes, "Why don't we
just do that?" Like, why not? And I was
like,
not? Like, and and and I was like,
"Yeah, why not?" So, we decided over the
course of that weekend to just throw it
all away. Just throw everything we were
doing before way. We had two paying
customers. We're like just breach
contract. I don't know. Like figure it
out. Like get out of it. We're done. And
we convened an all hands meeting on
Monday. Probably only about 20 30 people
in the company that time, but we
convened an all hands meeting over Zoom.
Um I think and we might not have used
Zoom then, but whatever video chat. And
we said, "Okay, we're switching
directions. We are now enterprise as our
customer, open core, per product." We
would have the open source, and we would
have a forked version internally that
had closed-source features. Yeah, it was
a fork, but an open-core business model.
Armon and I thought people would quit.
We don't have an exact number, but we
thought it would shatter some level of
confidence, like, wow, these guys have
no idea what they're doing. We didn't
have any idea what we were doing. And,
you know, open core even then had a bit
of an icky taste in people's mouths. So
we thought people would just
philosophically quit, like, "No, I came
here to work on open source. I'm not
going to do open core." And enterprise
was kind of just a stodgy, boring thing.
So there were multiple facets of why
people might quit. Nobody quit. The
vibes in Slack were amazing, super
positive.
>> Oh, why do you think that happened
internally?
>> We asked about it in one-on-ones and
follow-ups, and it was really that
everyone was kind of buzzing that we had
a clear direction and conviction. There
was fear of the unknown, but before,
there was this feeling of, we're just
throwing darts at the wall and doing
this thing and we don't know exactly who
our customer is. There was all this
uncertainty in a different way. And now
it was, we don't know if this will work,
but at least we're going to sprint
towards it. There were these clear
things: definitely enterprise,
definitely open core, definitely Vault
first. All of these were set in stone,
and that gave us a different kind of
certainty, and suddenly the company was
like, let's go. So yeah, nobody quit, it
went super well. And we started, I don't
know the time of year, but it was in the
fall, we built Vault Enterprise. By the
new year, within the first quarter of
trying to do sales, we could just tell
that it was different. It wasn't
obviously successful yet, but just the
caliber of conversation we were having,
the distance we were getting in the
buying process and the speed we were
doing it at, it just felt different.
>> And what was different about this
approach?
>> Yeah, part of it just comes down to
the classic startup advice: listen to
your customers. And we should have
listened from the beginning, because our
potential customers were screaming at us
to do what we ended up doing. We would
give these pitches about adopting all
the products and buying this
pie-in-the-sky thing, and there were so
many meetings where someone would say,
okay, I'll think about that, but how do
you replicate your secrets in Vault?
They would just ask these questions
where, if I was just listening, and I
was so blinded, a lot of us were
blinded, but if I was just listening,
I'd be like, "Wait, a lot of people are
asking about secrets replication." And
that's an at-scale problem. Maybe we
could close-source that, right? That's
what we ended up doing: our first
feature was secrets replication. Not
even across data centers; the first
feature was just a cluster of Vault
servers in a single region. You
would sell this more focused product,
and now the problems I talked about
earlier went away: security was
definitely the buyer, there was an
obvious budget, an obvious person you
were talking to, and there was a feature
that resonated at their scale. And so we
were just having much higher quality
meetings.
Mitchell just talked about how HashiCorp
managed to build a product that
enterprise customers cared about and
wanted to buy because it resonated with
their scale.
This brings us nicely to our presenting
partner for the season, Statsig. Statsig
offers engineering teams tooling for
experimentation and feature flagging
that used to require years of internal
work to build, and that is especially
important at enterprise scale. Here's
what it looks like in practice. You ship
a
change behind a feature gate and roll it
out gradually, say to 1% or 10% of users
at first. You watch what happens. Not
just did it crash, but what did it do to
the metrics you care about? Conversion,
retention, error rate, latency. If
something is off, you turn it off
quickly. If it's trending the right way,
you keep rolling it forward. And the key
is that the measurement is part of the
workflow. You're not switching between
three tools and trying to match up
segments and dashboards after the fact.
Feature flags, experiments, and
analytics are in one place using the
same underlying user assignments and
data. This is why teams at companies
like Notion, Brex, and Atlassian use
Statsig. Statsig has a generous free
tier to get started, and pro pricing for
teams starts at $150 per month. To learn
more and get a 30-day enterprise trial,
go to statsig.com/pragmatic.
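The gradual rollout described in this sponsor read, shipping behind a gate to 1% or 10% of users, is commonly implemented with deterministic hash bucketing. A generic sketch of the technique, not Statsig's actual implementation:

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: float) -> bool:
    """Deterministically bucket a user into [0, 100) for a given flag.

    Hashing flag + user keeps assignment stable across sessions and
    independent between different flags.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64 * 100
    return bucket < percent

# Roll a hypothetical "new-checkout" change out to 10% of users;
# the same user always gets the same answer, so you can dial the
# percentage up or down without reshuffling who sees the feature.
enabled_users = [u for u in ("u1", "u2", "u3")
                 if in_rollout(u, "new-checkout", 10.0)]
```

Because the bucket is a pure function of the flag and user id, raising `percent` only ever adds users to the rollout, which is what makes turning a bad change off (or a good one up) safe.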
And with this, let's get back to the
episode and what came after they built
Vault.
>> And I get asked about the open source
side all the time, but these corporate
buyers do not care at all about open
source. They don't care at all. They
need a commercial agreement. As for the
closed-source nature of it, some people
needed legal protections, like code
escrow in case of downtime and things
like that, but that was about the extent
of it. Otherwise they were like, we need
support, we need a proof of concept to
prove it works, we need some white
papers about other customers at scale,
blah blah blah. And that's what we had
to build up after that and get going.
>> And so you started selling Vault
Enterprise, and then you did it for the
other products as well, right?
>> Yeah, we did Terraform and we did
Consul. We had it for all the products.
But all this data is public, well, for a
period of time it was public: you could
look at the public reports from when
HashiCorp was a public company. It
really broke down to all Terraform.
>> One thing I remember is Terraform
just became so, so popular across the
industry. There's a Hashi stack, but I
only later learned that all the other
parts existed, because Terraform just
seemed to be everywhere. Why do you
think that sudden popularity was?
>> It's so funny to hear that, because I
accept and know that now, and I feel the
same way you do, that Terraform is this
huge thing. But for the longest time we
were the Vagrant company; no one knew
the other tools. And not only that, one
of the things that frustrated me, I
haven't heard it recently, but for a
period of time, was, oh, they only won
because they were first to market. I
heard that a lot, and we were like
seventh to market. Okay?
>> First to market in what category?
>> In terms of infrastructure as code.
>> So there were other players.
>> So many. Yeah. And no one was a clear
winner. It was a warring market.
But that first year, 2014, when
Terraform came out, at that time one of
my marketing strategies was, I was at
every conference I could go to. I
traveled an obscene amount. I was
speaking wherever I could, but even if I
couldn't speak, I was going just to talk
to people. And there's actually a little
anecdote here: when the COVID lockdowns
happened in March 2020, my wife and I
had nothing to do at night. We didn't
have kids yet. We opened up our
calendars and realized, we had been
dating since 2012, and this was the
first time in almost 10 years of our
relationship that I would be in the same
place longer than 8 days. For 9 years
straight, I had been somewhere different
at least every eight days.
>> That's how much you traveled.
>> That's how much I traveled. Yeah. And
I know there are consultants who travel
a lot more, but I was traveling a lot, I
was coding a lot, I was doing all these
things.
>> You must have coded while you
traveled as well.
>> All the time. Yeah, I had a whole
system. When I started traveling,
in-flight Wi-Fi didn't exist.
>> Yeah. Exactly.
>> Even now it's kind of patchy.
>> Yeah. So I wrote these scripts that I
ended up iterating on, but mostly
reused, where I downloaded all the
GitHub issues and categorized them, and
I would break them down into tasks, none
of which took more than 10 to 15
minutes. I just created this list, and
when I was on the plane I would bust
them out one by one.
>> Uh there's no internet so just commit
them locally.
>> Yeah.
>> And then I would get back, and some
people used to notice this, because I
would land, you would get this push, and
people would get these email
notifications where like 30 issues were
closed all at once.
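The kind of pre-flight script he describes, pulling issues down and keeping only small, self-contained tasks for offline work, might look roughly like this. The field names, size estimates, label filter, and filename here are invented for illustration; they are not his actual scripts:

```python
import json

def build_flight_queue(issues: list[dict], max_minutes: int = 15) -> list[dict]:
    """Keep only small, self-contained issues suitable for offline work.

    Each issue carries an estimated size in minutes and a label list;
    design-heavy work is excluded so every task fits in one short burst.
    """
    queue = [
        i for i in issues
        if i["estimate_min"] <= max_minutes and "design" not in i["labels"]
    ]
    # Smallest tasks first, so there's always something to finish.
    return sorted(queue, key=lambda i: i["estimate_min"])

issues = [
    {"title": "fix typo in docs", "estimate_min": 5, "labels": ["docs"]},
    {"title": "rework provider API", "estimate_min": 120, "labels": ["design"]},
    {"title": "nil check in parser", "estimate_min": 10, "labels": ["bug"]},
]
# Save the queue locally so it's available with no internet on the plane.
with open("flight_queue.json", "w") as f:
    json.dump(build_flight_queue(issues), f, indent=2)
```

The point of the filter matches what he says next in the interview: design-heavy, multi-hour work is excluded up front, so every queued item can be finished and committed locally in one short sitting.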
>> Wow. But I found the key was
pre-planning what issues you were going
to work on. I did that online on the
ground.
>> Yeah.
>> And then breaking them down into
15-minute chunks, because I found it was
really hard to get into multi-hour flow
on an airplane, even when I was
traveling to Japan or something. So I
was like, I'm only going to work on the
stuff that isn't heavy design work, none
of that. It's just bug fixes, just
cleaning stuff up. And so that was my
process.
>> In 2021, HashiCorp went public. What
was it like to go public, both in terms
of preparing for it and how it felt?
What changed after?
>> On the prep side, I don't have the
full answer, because I stepped down from
the executive team maybe 6 months before
we went public. So I was part of some of
the planning, and obviously I was very
aware that we were planning to go
public, but, for example, I wasn't part
of the road show or any of that. But
yeah, you know, from my seat,
the parts I was part of, the parts I had
visibility into, I mean, it takes over a
year to do. So there's a lot of prep,
and there are some funny things you do,
like you start running like a public
company at least two quarters before
you're public.
>> I don't remember what the drop-dead
date is, but there's a date where you
could just cancel going public, and it's
pretty close, very close to the actual
day. So you kind of run
like a public company, to the point
where you do mock earnings calls, at an
actual conference room table. Your
"public investors" aren't in the room;
they go somewhere else and talk over the
speakerphone and ask you the types of
questions you'd get. Your CFO or VP of
finance gives the full report of the
quarter. They try to frame the types of
questions you'd get. You run it, and you
try to figure out whether it's running
well enough, I guess. And that's sort of
what the prep feels like. And there's an
obscene amount of secrecy, because from
a regulation standpoint you can't talk
about any of this. I mean, you could
look back at even the dumb stuff, like
Hacker News comments: I just went radio
silent. It's the clearest signal that a
company's going to go public, because I
went radio silent on every topic,
because everything became questionable.
I remember there was a Hacker News
comment I had made like eight months
before we went public, and our general
counsel, in the middle of the night, was
like, you have to delete that. After he
talked to me, I could see how it might
affect things, but I hadn't realized it
mattered, and I ended up deleting it.
>> And is this because you're not
supposed to give information away, or
something like that?
>> I don't remember the exact
regulation, to be honest.
>> But there's some regulation about not
leaking information.
>> It's not really, I mean, it is, it's
all information, but it's more that you
can't influence the market in any way.
And you can't make promises, because if
you say, oh, we're going to go public,
it might cause even private funding to
froth up, and it's a form of fraud. So
yeah,
basically I just stopped talking about
everything. I don't know how seriously
other people take it, but I took it to
the point where I planned this trip to
New York to go public, and I invited my
parents, and I didn't tell them why we
were going to New York. I just told
them, I want you to come to New York,
it's really important, it has to do with
HashiCorp, and they were like, "Sure."
And I said, "I can't tell you about it,"
and they said, "Sure." I told them maybe
a month in advance. We had a dog; we had
to get our dog sat by my aunt, and I
just told her we were going on a family
vacation, right up to the point we left.
I told nobody; basically only my parents
knew. None of my friends, nothing,
except the friends who worked at the
company. But yeah, that's what it's like
leading up to
it.
>> Yeah, I was at Uber when we went
public. And previously I read that, well
before going public, HashiCorp got an
offer from VMware, way earlier.
>> That was super early,
>> like two years into the company. We
went public probably 10 years into the
company. So, yeah.
>> Yeah. So when they tried to buy you,
what was it like? Did you almost sell at
some point? Was there any point where
you were close to potentially selling?
>> It felt close, and I got a lot of
accounts afterwards that it was very
close. It came down to one vote on the
VMware board, is what I heard. About two
years into the company, we were only
three people, including me and Armon. We
had one employee, I guess: two founders
and an employee, the three of us. We got
approached by VMware. You know, I didn't
know what
VMware. Um you know I didn't know what
this would be like and and it is not
what it what it isn't is they don't show
up and say we would like to buy you.
>> No
>> no no
>> that would be too obvious. The way it
happens is you get an email from some
low-level business development person
that wants to just like talk vaguely.
And the vague talk is they're not
interested in buying you. The one of the
jobs of BD people at large companies is
just to have an understanding of the
ecosystem. So it's really just like
let's have an understanding. They might
have had an executive tell him or her to
go talk to this company. There might
already be an executive kind of poking
around, but yeah. So it kind of starts
out that way. It turns into, would you
like to come by our offices and meet in
person? Oh, our VP of engineering swung
by, let's talk to him, nice to meet you,
blah blah. Then, and I think this was
our actual timeline, there was a dinner
with three VMware executives. At that
point we thought they might be
interested, but there was still so much
dancing. And this is months before there
was even an offer. It was still so
social: we drank, we talked about our
hobbies and interests, and only very
basically about tech. It's really more
vibes at the dinner. And then it started
to get more serious. We spent more time
in Palo Alto at the VMware offices,
where we started talking about
partnerships, about how VMware could
help our products more. It starts with
partnerships, and then it turns into
hypotheticals: if you had the resources
of VMware, what would you do? We're like
six meetings in at this point, and
there's no offer of anything. And at a
certain point, honestly, we were getting
tired of it, because nothing was
happening anyway.
>> It sounds like you're a startup and
you're going to all these meetings.
>> Oh, and I didn't even live in the Bay
Area, so I was flying up all the time.
It was a waste of time. And to a lot of
founders, that is the warning I give
them: M&A becomes a waste of time. I
have another anecdote about mergers and
acquisitions.
>> M&A becomes a waste of time.
>> So I'll tell you another anecdote
after this, but ultimately we kind of
politely had the, okay, let's move
forward or get off the pot kind of
conversation, and they put an LOI in
front of us, a letter of intent. The LOI
was one page. It's basically a
semi-binding promise that we're pursuing
buying you. No number on there. It's
just kind of vague.
>> Still no number.
>> Yeah. Well, verbally there was, but
they're not writing anything down,
they're not putting anything in email,
none of that. It's just verbal. And at
that point, verbally,
>> we had been given a number of $20
million, which
>> doesn't sound that much.
>> Well, yeah, but we were 23 years old.
>> Oh, yeah. The three of you, 23 years
old.
>> I'm 23 years old. Me and Armon
together own 70% of the company.
>> Okay. Yeah.
>> Yeah. You know, it sounds
interesting, to say the least. What I
tell people is, you start thinking about
the things you will buy. That's the
dangerous path. That's what happens. And
we had advice from people who said it's
phenomenally too low, wildly too low, so
go ask much higher. And we asked, I
don't remember anymore, for maybe 40 or
50 or something. And they just said yes.
They said okay. And then, you know, you
realize that's still way too low.
And that was verbal, too, so there was
nothing binding about that. It wasn't
like a hard yes; it was more like, okay,
we'll work on that, you know, but very
positive.
>> Yeah, in this indirect,
>> indirect business sense. Yes. And it
turned into, come meet the
CEO of VMware. Clearly they're
interested, because we're still
climbing. But Armon and I kind of
started getting cold feet, because, the
way we described it, it's a
dream-killing amount of money. You would
take the money, but you're too small to
be important to a company like VMware,
so they're going to just...
>> Because even though it's so much
money
>> personally, it's so much money,
>> you know that at VMware's level, you
see their revenue and all that, and you
realize that for them it's not a big
deal.
>> It's meaningless to them. Yeah, it's
meaningless.
>> It's crazy. That messes with your
mind, you know.
>> Yeah. Yeah. So it becomes this thing
where personally your life could change,
but this thing we both were truly
passionate about, the thing I wanted to
work on more than anything else, would
end, in a sense, because I would
probably get thrown into working on ESX
or something.
>> You would report to a manager at
VMware, not even the CEO.
>> The executives make it sound like
they're going to do all this stuff with
your products, but that's just one
executive, a cog in the corporate
machinery. So we started getting cold
feet, being like, if they're interested,
maybe we're onto something. And if we're
onto something, we don't want to sell
out early, and sell out in a way where
our dream dies. That's why it's a dream
killer. Armon, very maturely, and he's
two years younger than me, so he's 21 at
this time.
>> No way, he sounds like the older one.
>> Yeah. Yeah. He's very mature. And
Armon very maturely came up with the, I
forget where it comes from, but the
regret minimization framework. He was
like, personally, on your own, go think,
and I'll do the same, and let's each
come up with a number such that, if we
walked in the next day and they said
we're killing everything, you're going
to go work on ESX for the next four
years, because we were going to have a
lock-up no matter what, we would still
be like, cool, this was worth it. What's
the minimum-regret number? We came back,
and I don't remember exactly what our
numbers were, but they were pretty
close, and we ended up at 100. It felt
so wrong, like, how could we possibly
ask for 100? But we said this is what
we're going to do, and we stuck to it.
So we went back, we asked for 100, and
it wasn't a no
>> and they they wasn't a yes this one had
a lot more hesitance it was a lot more
like
>> uh we'll get back to you right like I
don't know but it wasn't a no
And basically they came back to us and
said, this requires board approval, so
we're convening a board meeting next
week. Like, unplanned. That's not when
their board meets. We're convening the
VMware board, we're going to vote on
this. And then we heard that the vote
didn't pass. That was that.
>> It's just crazy how such small things
could, you know, influence things. If
that was an extra yes from one person,
who knows. It's hard to say, but at
VMware you might have been plugging away
on this project.
>> Yeah. Yeah. I mean, we hadn't built
Terraform yet. So Terraform
>> Terraform probably never would have
existed. High confidence. I know whose
vote it was, I know why they voted that
way. Like, I know a lot more details,
but it worked out, obviously, in my
favor. But yeah.
>> So you've left HashiCorp and you're
independent. And one cool thing about
being independent is you're just very
honest about stuff. And there was this
really interesting thread on Twitter
where you wrote, like, ask me anything
about the big cloud providers, because
at HashiCorp you worked with all of
them. What was your experience back
then, uh, of, you know, Azure, AWS,
Google Cloud? Like, your honest view of
how they worked back then, and possibly,
how have your views changed on them?
>> The precursor to that is, while I was at
HashiCorp, I obviously had to be very
careful about what I said about any of
the cloud providers, because we were
partners with all of them. I didn't want
to insult anyone, and so I was just very
professional about all those
relationships and, like,
>> We like all of them, like
>> Yeah, or just say nothing. If you have
nothing nice to say, don't say anything
at all. And then I left, and I still
kept that up, because it was too close.
I was still flying too close to the sun,
as they say. And then enough time passed
where I was like, ah, my opinion doesn't
really matter. And, um, yeah, so, to
answer your question, my broad view of
all of them was that AWS was really
arrogant. Annoyingly arrogant is how I'd
describe it.
>> And when you say arrogant, can you help
us understand, like, how you worked with
them, or what part of them? Or is it
just general?
>> I'll start by disclaiming that, you
know, we worked with so many people
there, individuals, all of whom were
awesome and nice and kind. And so I'm
not trying to make individual judgments
here. It was just more about how all of
it came together and how it felt as a
whole. So by
arrogant, I mean it always felt like
they were doing us a favor at every
turn: in terms of partnerships, in terms
of just getting a meeting with them. It
always felt like, you should be thankful
that we're spending time talking to you.
And not just that, but there was also
always this subtle vibe of, we will just
spin up a product and kill your company.
It felt that way, even though no one
ever said that. Um, well, it kind of got
to a point where it was sort of like, if
we don't come to terms, we're going to
build this service. It did kind of come
to that,
>> but you know, we did see that later on
with Elastic and
>> Oh, that had already happened.
>> Oh, it happened already.
>> Yeah. Just not with us, but with
others. OpenSearch.
>> Yeah. And they always publicly spun it
as, like, oh, it's so great, it builds
the ecosystem larger, and we're doing it
by the letter of the license. And, you
know, it all has elements of truth to
it, but it's still not a nice thing.
>> No, I don't think people paying
attention to open source appreciated
what Amazon did with it. It really hurt
Elastic's business, and it showed how
open source can be weaponized against a
company that spends, you know, their
blood, sweat, and tears. And I guess at
HashiCorp you had the same thing, right?
Cuz you were publishing permissive, but,
I mean, open source needs to be
permissive, so
>> It was MIT or MPL licensed. Yeah.
>> So, like, Amazon could have spun up
anything they wanted.
>> Yeah.
>> There was like a two-year period where
I think, for the entire two years, the
entire leadership team was terrified
that at any moment there would be, like,
a Vault service or something that would
pop up. Um, and so, yeah, that's sort of
my characterization of AWS. It really
took, like, teeth pulling, for example,
to get them to help with the AWS
Terraform provider. Um, we had, I don't
remember the exact number, but we had
something like five full-time engineers
employed working on only the AWS
provider for Terraform, which, you know,
maths out, full benefits and everything,
to like a million dollars a year.
>> Um, and all of that was pure open
source, pure integration with a
commercial entity, and they were not
helping us at all. And they were the
last of any of the cloud providers to
provide any sort of help there. And it
came down to some drama where we went to
a meeting and basically said that we're
going to publicly say that the AWS
provider is deprecated and we're done.
Like, the community could pick it up or
whatever, but we're not going to
>> Yeah. Cuz you didn't get any help from
them.
>> Yeah. And it's taking up too much work,
and there's too many bugs, and,
honestly, AWS is shipping features too
fast, and it's just not worth it. And
that freaked them out, and finally they
started helping. You know, they might
recount their side of things
differently, but that's pretty much it.
It felt like no movement for years, and
we said that, and movement started
happening really fast. So, yeah, there
was that. Um, Microsoft: I have the most
positive view on Microsoft. They had a
really hairy technical product is how
I'd describe it. It was very difficult
to use.
>> Azure
>> Azure. And a lot of nouns, like
principals. And I, still to this day,
and I've integrated with the service,
don't fully understand the IAM hierarchy
of Azure. Um, I just kind of bolted it
on and got it working with a team, and
that was that. So, technically kind of
eh, but from the business side:
competent, um, professionals and team
players. That's
how I'd describe it. We went into every
meeting with them, and in a lot of our
meetings the first question was, how do
we both win? That was the first
question. And, yeah, very pleasant.
Awesome. They were the first people to
jump on board supporting Terraform.
Sure, there's some kind of bias there,
but they were consistent throughout the
years. So, positive on Microsoft. Um,
and Google Cloud, you know, my, or yeah,
Google Cloud in general: it was always
the best technology, the most incredible
technology and architectural thinking.
And, I swear, it felt like none of them
cared or thought about the business at
all. It was like every partnership
meeting, we'd spend hours talking about
the coolest edge cases and scalability
and how this is going to work. And I
think the best public example, that you
could just see in history, was: they
were the only company that, when they
partnered with us to write the provider,
spent a lot of time building this very
good, I think they called it magic
something, they fully automated the
whole thing. So when they shipped a new
Google Cloud thing, it had a Terraform
provider resource right away. And not
just that, it didn't feel automated. It
felt very ergonomic, and it was good. It
was really good. And so they had that.
But whenever we would get into, how do
we do co-sell? How do we attribute your
sales engineers' quota to selling
infrastructure that's spun up by
Terraform? Like, how do we do this? The
business side of things.
>> Crickets. Like, impossible to get
anyone.
>> Not just impossible. It was like, even
if you got someone, they would say
something for 20 minutes and be like,
"Okay, cool. We have two more hours.
Let's figure this other thing out." And,
yeah, that's what it felt like. And the
other disclaimer I give is, all this
knowledge was circa, I don't know, 2019,
something like that. So maybe in the
past seven years things have
dramatically changed, but that's what it
felt like.
>> Yeah. Going to open source: you're
actively involved in open source today,
and it seems open source is changing a
lot, especially with AI. And, you know,
you're seeing stuff at Ghostty. Can you
tell us how open source has changed with
Ghostty, with AI contributions, and what
you're seeing with open source
maintainers? It seems like there's a bit
of, you know, drama or worrying stuff
happening.
>> Well, I would say, more broadly, the
issue facing open source today, um, I
mean, there are multiple, but the one
that I feel is most prevalent across the
industry right now, is AI contributions,
and specifically the signal-to-noise
ratio being incredibly low. In other
words, just being super noisy with
low-quality contributions. It's just
stressing the system quite considerably.
And yeah.
>> And so after you left HashiCorp, you
started Ghostty. Uh, how many years ago
was that? Was that like two years or so?
>> Well, I left HashiCorp a little over two
years ago. I had poked around with
prototypes of Ghostty maybe three years
ago. But after I left HashiCorp, I
started working on it much more, like 20
hours, um, just because it was the thing
that I had.
>> What drew you to Ghostty? What was your
kind of vision? Why did you start
working on it? It's a better terminal,
right?
>> It's a terminal. Better is subjective.
>> Well, I installed it cuz I like it
better. But yes, a terminal, an
opinionated terminal, right?
>> Opinionated. Um, very modern in terms of
supporting as many of the newer specs as
possible that enable functionality like
displaying images, or, um, you know,
clicking on your prompt to move the
cursor, but, like, dozens more examples
like that. The original thing that drew
me to it is the exact opposite of the
good advice that people usually give,
which is that you find the problem, you
build a solution, and you pick the best
technology to then solve it. What I did
was, I found a set of technologies, and
I was like, what could I build with
these technologies? I went the opposite
direction. I had spent over 10 years, 12
years, at HashiCorp, and 3 years prior
to that doing infrastructure open
source. So 15 years in total just
thinking almost all the time about
infrastructure and cloud services and
things like that. And I felt that I was
rusty. My skills had sort of weakened on
desktop software and systems programming
to a certain extent, because I was so
constrained by networking challenges and
distributed systems. So, like, low-level
systems programming had atrophied. Um, I
had never really worked with GPUs. And
GPUs, I guess crypto was happening, but
I kind of ignored that whole trend. Um,
and this is pre-AI. Um, but GPUs were
obviously in use, and I just felt like I
had no idea how they worked. So I wanted
to go to desktop. So I picked all these
different technologies, and I said,
"Okay, Zig." Cuz it looked cool to me. I
just wanted to try it.
>> For those of us, I'm not into Zig, I've
heard good things about it. Can you
explain why Zig is so interesting and
innovative, and why does it grab so many
devs' attention?
>> I don't know why it grabs other people's
attention, but for me, it just felt like
the best "better C" that I saw out
there. And I'm someone coming from the
position where I actually enjoyed
writing C. So a better C sounds great to
me. To me, it's not very annoying in
terms of, if I want to blow my own foot
off, please let me blow my own foot off.
You know, a bunch of qualities came
together where I thought, on the
surface, it looked cool. But it's very
hard to judge a programming language on
the surface, so I wanted to build
something with it. And so, yeah, I
picked the GPUs, desktop software, what
could I build? Uh, for all my time at
HashiCorp, um, I built CLIs. And I was
like, well, I live in a terminal. Like,
what does it take? I live in a terminal,
and yet I understand very little about a
terminal. So why don't I just build a
toy project that's a terminal? That's
how it started. And, as with a lot of
stuff, I find that once you dig beneath
the layer of taking something for
granted, you realize that everything is
way more nuanced and complicated than
you imagined it to be. And terminals
were the same way. Once I dug beneath
the surface, I realized how much they
were doing, how brittle some things
were, how much better certain things
could be, and I got sucked into being
like, I want to do this better. So,
>> Okay, for, like, someone who's a dev,
you know, I use terminals as well. I'm
going to ask the stupidest question: how
hard could it be? What does a terminal
actually do? And then can you maybe tell
us how Ghostty is structured, like, what
are the things that it needs to do? Just
to give a little empathy for all the
work that you're doing.
>> Yeah. Yeah. I actually get that question
a lot. So it's definitely not a dumb
question. It gets asked less now, but a
lot of people are like, "I thought they
were done." That's usually the most
common feedback I get. Like, what is
there to do in a terminal? Um, so, at a
basic level, they don't do a lot. The
problem is that the functionality that
terminal developers want has grown
significantly. But let me just give what
they do. It's kind of like an
application development platform, right?
It's not an operating system. You're not
dealing with, like, hardware-level
problems, but it is like an application
sandbox on top of that. Other
applications run within it and need to
render text. They need to render colors
and images and widgets and mouse events
and all this stuff. Like, the best
description is, it's like a browser, but
for text content. And so all of the
complexities that a browser has, a
terminal has similar ones, at a smaller
scale, but similar ones. And if you try
to extend what a terminal is capable of,
then it gets, you know, you start
bringing in more and more problems.
Like, as soon as you bring images into a
terminal, you introduce a whole new
ecosystem of problems. But the
tongue-in-cheek answer I like to give
for Ghostty's complexity is that it's
30% a terminal and 70% a font renderer.
Uh, and, uh, yeah, that's what it feels
like. It's really like a problem of, uh,
you know, that terminal screen you see,
whether it's GPU- or CPU-rendered, that
terminal screen you see is like you're
drawing on a canvas. So you are building
a renderer for text in there, and
everything kind of bubbles up from
there. So, from a rough architecture
standpoint of Ghostty, I like breaking
it down in terms of threads, because
Ghostty is multi-threaded. Most
terminals are not. Um, and I'm not
saying that as a positive point, it's
just a good way to describe the
architecture. We have a central UI
thread, which just draws the windows and
stuff. That's pretty standard for
desktop software. And then we have an IO
thread, which runs the actual shell that
you're seeing. So any bytes that we
send, or that it sends back to us, are
processed by the IO thread. And then we
have a renderer thread, which is
actually drawing it. So the best way to
think of it is, it's on a VSYNC clock,
whether 30, 60, 120 frames per second,
and it's just sampling what the terminal
state is and then drawing it. And the
renderer itself uses a font subsystem on
the same thread. We have to take the
fact that this grid has this character,
these sets of characters, and map them
into fonts, and do that all on our own.
A lot of people think, oh, doesn't the
operating system solve that for you? But
they don't, unless you're much higher
level. Like, you know, you can't just
easily draw monospace text in that way.
You have to really put the pieces
together. That's the big picture. Um,
it's quite simple at that level. And
then, just, you know, extend all the
functionality that terminals have into
that.
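The thread split described above can be sketched as a toy model (Python here purely for illustration; Ghostty itself is written in Zig, and none of these names come from its codebase): an IO thread feeds shared terminal state, and a renderer samples snapshots on a fixed frame clock instead of redrawing per byte.

```python
import threading
import time

class TerminalState:
    """Shared state: the IO thread writes, the renderer samples."""
    def __init__(self):
        self._lock = threading.Lock()
        self._chunks = []

    def feed(self, data):
        # Called from the IO thread as bytes arrive from the shell.
        with self._lock:
            self._chunks.append(data)

    def snapshot(self):
        # Called from the renderer thread on each frame tick.
        with self._lock:
            return "".join(self._chunks)

def io_thread(state, chunks):
    for chunk in chunks:
        state.feed(chunk)

def render_frames(state, frames, hz=120, count=3):
    # Sample the terminal state on a fixed clock, like a VSYNC tick.
    budget = 1.0 / hz
    for _ in range(count):
        frames.append(state.snapshot())
        time.sleep(budget)

state = TerminalState()
frames = []
io = threading.Thread(target=io_thread, args=(state, ["$ ls\n", "ghostty\n"]))
io.start()
io.join()  # let the shell output land first, to keep this sketch deterministic
render_frames(state, frames)
print(frames[-1] == "$ ls\nghostty\n")  # True
```

A real terminal samples whatever state exists at each tick while output is still streaming in; joining the IO thread before rendering here just makes the toy deterministic.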
>> So you're kind of building, like, a 2D
graphics engine, a little bit, that's
very focused on fonts.
>> Yeah. Yeah. From a renderer side, it's
very simple. The renderer is actually
not that complicated, and I won't
overcomplicate it. The hardest part is
actually maintaining the terminal state.
So the way terminals work is, they're a
grid of monospace cells. So you'll have,
like, 80 by 24: 80 columns, 24 rows. And
there's commands that the program could
send to move the cursor. Or, I like to
say, think of it like a paintbrush: a
command could say, make the paintbrush
red and bold, and everything after that
is red and bold, and now change it. And
you're just maintaining the state and
drawing around. And then there's all the
scrollback, right, which people are used
to in terminals, going back. And that's
where the challenge is: doing that in a
fast, performant way. And that's what I
try to do with Ghostty. And there are so
many benchmarks we run, but one of the
most obvious ones that shows the speed,
which also gets a lot of criticism, um,
is just catting a large file, reading a
large file. If you just dump a bunch of
text, how fast can it get through it?
And you'll see a stark difference
between modern terminals. I'm not just
going to say Ghostty here. Like, if you
take Ghostty, Kitty, um, Alacritty, um,
any of these newer terminals, they're
all going to do great compared to
Terminal.app on macOS, um, or
traditional Linux terminals. The
criticism is, why does that matter? And,
you know, the easy answer is, when you
accidentally cat a file, a lot of people
will force close. Um, the creator of
Redis posted a great comment, a great,
uh, comment on Hacker News about why he
loves Ghostty, which is that he
previously used to tail production Redis
logs, and, you know, it just spews logs
out, and he used to have to send them to
an intermediary file and then read them
out later so he could render it.
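The grid-and-paintbrush model he describes can be sketched in a few lines (a toy illustration; the names are invented, not Ghostty's data structures, and scrollback is omitted):

```python
class ToyScreen:
    """A grid of monospace cells plus a current 'paintbrush' of attributes."""
    def __init__(self, cols=80, rows=24):
        self.cols, self.rows = cols, rows
        # Each cell stores (character, attributes at the time it was written).
        self.grid = [[(" ", set()) for _ in range(cols)] for _ in range(rows)]
        self.cx = self.cy = 0
        self.brush = set()

    def set_brush(self, *attrs):
        # "Make the paintbrush red and bold": everything after is stamped so.
        self.brush = set(attrs)

    def put(self, text):
        for ch in text:
            self.grid[self.cy][self.cx] = (ch, set(self.brush))
            self.cx += 1
            if self.cx == self.cols:  # wrap to the next row (no scrollback here)
                self.cx, self.cy = 0, self.cy + 1

screen = ToyScreen()
screen.set_brush("red", "bold")
screen.put("error")
screen.set_brush()      # reset: everything after is plain
screen.put(" ok")
print(screen.grid[0][0] == ("e", {"red", "bold"}))  # True
print(screen.grid[0][6] == ("o", set()))            # True
```

In a real terminal the brush changes arrive as escape sequences (SGR commands) in the byte stream rather than as method calls, but the state being maintained is the same shape.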
>> So he could render it and actually work
with it.
>> And he doesn't have to do that anymore,
because Ghostty is fast enough that he
could just let it dump while he's going
through it, parsing it, like, mentally
parsing it, things like that. And that
just saves him time. And, um, yeah.
>> There's something to be said, and at
some point we should probably talk more
about the fact that a lot of software
these days does not care about
performance. I think it's refreshing to
actually have examples, and I hope we'll
get back to it at some point. You know,
we'll talk about AI, but that might not
help. But there's a level of
craftsmanship, right? Just not wasting
resources, being efficient. I see it in
my day-to-day life: we have more
powerful laptops and phones, and they're
not getting any faster, and it's just
frustrating at times.
>> It's kind of like the love of the game.
I mean, a lot of Ghostty is just the
love of the game. Um, like I disclaimed
before, it's not complicated. I'm not
ever going to say that Ghostty is like a
2D game, because a 2D game, from a
rendering standpoint, is much more
complicated. Um, but I do care a lot
about the renderer, and we got our
renderer down to, for a full screen on
my Mac, a full set of grids, each frame
updates in roughly, I don't know,
something like 9 microseconds. Um, that
doesn't include the draw time. That's
just taking the state and submitting
work to the GPU. It's about 9
microseconds, and then the GPU takes
some time. 120 hertz, 120 frames per
second, a frame is 8,333 microseconds.
So if you have nine, you know, again, we
don't have the number for how long the
GPU takes, but it's super performant. It
doesn't take much time at all.
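The frame-budget arithmetic here checks out (the 9 microsecond figure is the one quoted in the conversation):

```python
# At 120 Hz, the per-frame time budget is one second divided by 120 frames.
REFRESH_HZ = 120
frame_budget_us = 1_000_000 // REFRESH_HZ  # 8333 microseconds, as he says

# CPU-side work quoted above: ~9 us to sample state and submit GPU commands.
encode_us = 9
percent_used = encode_us / frame_budget_us * 100

print(frame_budget_us)         # 8333
print(round(percent_used, 2))  # 0.11 -- about a thousandth of the budget
```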
>> You're leaving a lot of headroom for
>> What I'm saying is, we could have made
it 2,000 microseconds and it wouldn't
have mattered. You would still get that
performance. But that's not fun. Like, I
want to make it sub-10.
>> I like the fun.
>> Yeah. So we spent a lot of time on it.
It was a big thing, I blogged about it,
where we got it down from, it used to be
about 800 microseconds, and we got it
down to, like, nine. And, uh, I thought
that was awesome, even though for end
users it doesn't make a difference.
>> But, as you say, the craft and the love
of the game. So, when you started
building Ghostty, that was around the
time when, I think, ChatGPT was out.
There were some tools. How did your
toolset change in terms of how you're
developing day-to-day?
>> There are two sides to that. So, one, AI
gave a huge boost to terminals, which is
a funny thing. Like, oh, how so? Because
of Claude Code and all these things, the
amount of time spent in a terminal has
gone up. If you'd told me in 2023 that
terminal usage would go up, I would have
said, no, it's not going to go up. Um, I
had no illusions that I was going to,
like, save terminals, and I didn't,
right? Like, AI came out, and out came
all these CLI tools. And even when
you're seeing, like, the Codex apps and
Claude apps leaving the terminal,
they're still executing so many things
in a pseudo-terminal. The number of
terminals out there is massively larger
than there was in 2023, which is
hilarious.
>> Oh wow.
>> Yeah.
>> So random.
>> Super random. And so that's part of why
one of the things I'm doing with Ghostty
is extracting, it's actually extracted
already, what I've called libghostty.
Everyone reinvents this very small
surface area of a terminal, and because
they do, things break. Like, all sorts
of things break. Like, if you run a
Docker build, or push to a platform like
Heroku, and you do enough weird things
in the terminal that aren't actually
that weird, just, like, drawing a
progress bar, it renders like chaos
>> all over the place
>> all over the place. Yeah. Um, and it's
just because they've poorly implemented
a tiny subset of a terminal, because
terminals are more complicated than
people think. And so libghostty is this
minimal, zero-dependency library so that
people can embed terminals anywhere.
>> Oh, cool.
>> And, yeah, yeah, MIT licensed, and it's
really like, I'm tired of seeing broken
terminals everywhere, so please use
this. Um, so, okay, that's the one
angle. Really funny. But the other angle
is actually AI usage. It's hard to say.
I'm a big fan, but, you know, within the
right categories of things. Like, I
think that it's a revolutionary tool,
and I get a lot of joy using it. Yeah, I
use it every day. Um, I use tools like
Claude Code and Amp and Codex, and the
chat tools, like, every day for some
aspect of my life. And it's really
allowed me to choose what I want to
actually think about, right? I think
that's the most important thing: I
always felt limited in terms of, oh, I'm
going to have to spend the next two
hours, I don't know, doing this
boilerplate annoying stuff that I don't
want to learn about. But now I don't
have to learn about it. Which is, yeah,
I'm not getting skill formation in that
category, but I could now spend those
two hours doing something else, and
that's the best to me.
>> In your workflow, do you just use a
single agent? Do you use multiple
agents? Have you experimented with them?
>> I've tried a bit of everything. I would
say my standard workflow, what I try to
do, is I endeavor to always have an
agent doing something at all times. Uh,
maybe not when I sleep. I don't go that
far. A lot of people do go that far; I
don't. Um, but while I'm working, I
basically say I want an agent: if I'm
coding, I want an agent planning. If
they're coding, I want to be reviewing.
Or, you know, there should always be an
agent doing something.
>> So you have it in a separate tab.
>> Yeah, a separate tab. And sometimes it's
multiple. There's a lot of work that I
do around cleaning up what agents do,
and I don't run, like, Gas Town-esque
things. I'm the mayor, so to speak. And
so I don't want to run too many. I don't
find it that fun to clean their stuff
up. But periodically I'll run two, um,
in competition with each other, because
it's a harder task and I don't have high
confidence that they're going to just,
like, crush it. So I'll just run Claude
versus Codex or something like that. Or
I'll have one coding, and I'll have one
doing some sort of research task. Um, I
absolutely love them for research.
That's awesome. Um, and then I'll be
doing something else. But no more than
two, I would say. Yeah.
>> The code that they generate, do you
always review it, or have you gotten a
bit more loose? And, you know, some
people swear by closing the loop, having
validation for it. Or are you still
like, all right, I want to see the exact
code, and I'll review whether it's
correct and what I expected?
>> Uh, it matters what I'm working on. If
it's Ghostty, I'm reviewing everything
that's going into it. Um, if it's, like,
I set up a personal wedding website for
one of my family members, I don't care
at all what the code looks like. Did it
render right in the three browsers that
I tried? Yes. Did it render right on my
phone? Yes. Don't care what the code
looks like. Does it make any network
calls? No. Has no secrets access. I
don't care. Like, ship it. It's only
going to be online for two months. So,
ship it.
>> Yeah. And then how did the AI policy at
Ghostty change? I remember that maybe a
year ago or so, you asked for
disclosures if someone was using it. And
just very recently you kind of cracked
down and said, like, all right, no more.
>> Yeah, we're going to change again, too.
Well, not going to change, we're going
to iterate. Um, so, yeah, a year ago we
started asking for disclosure. And, you
know, the very fair question there is,
what does it matter how the code is
produced? And the reason it always
mattered to me was because it dictates
how much effort I put into fixing it.
Because if you produced the code with AI
and you did it really quickly, then I'm
not going to spend hours fixing up your
code. You spend your time fixing it.
>> Yeah. Cuz you know that that person
didn't put in much human time. You're
kind of trying to mirror it, right?
>> It's effort for effort. If you put in
hours, I'm going to put in hours back,
and I'm going to help you. But if you
put in a few minutes and never read
anything and throw it over the wall,
then I should be able to read it in a
few minutes, say no thank you, and close
it. It's fair, and I need to be able to
better understand what that is. And, you
know, it's not about bad code, because
open source has always gotten bad code
contributions. But the difference before
is, usually those bad code contributions
came from people that were genuinely
trying their best and put in a lot of
effort just to get to that bad-code
point. And so people behave differently.
I would always try to reciprocate by
being like, this is someone very junior,
or this is someone just new to the
project, and I would try to educate
them, be like, "Okay, we should do this
better," and give these careful reviews.
But if it's bad code where there was low
effort, I'm not going to give a careful
review. So, again, I wanted to know
these things. And the disclosure worked
decently well. The issue wasn't the
disclosure. The issue was that the
quantity of low-quality AI PRs that we
were getting reached a point where it
was too high.
>> Do you know why that might have
happened? More people instructed agents
to contribute a PR to fix an issue they
had? Like, do you have theories, or have
you actually seen evidence of why this
happened?
>> I have theories, and I've seen some
evidence. But, yeah, I mean, I think,
obviously, there's the rise of just AI
usage in general. But the real trend, a
step change that I saw at a certain
point, and I don't know when it happened
cuz I don't use agents in this way, is
that at a certain point they started
opening PRs. You know, before, it was
like, you generate code, and maybe they
commit and stuff, but you would still,
like, push it to a branch and then open
the pull request yourself. At a certain
point, they started opening PRs. And
there is a dead giveaway that it's AI,
because, at least to this day, at the
point we're recording this, the way
Claude opens a PR is: it opens a draft
with no body, then it edits the body
later, and then reopens it for review,
>> which is not how a human would do it.
>> Oh, like, one human a year would do
that. And now it's happening three times
a day. And so, even if they're not
disclosing AI, or they're hiding it,
it's like, oh. And it happens at a speed
that's unrealistic. It opened, the body
came in less than a minute later, and it
reopened less than a minute later. Like,
>> yeah,
>> pure AI. I just tweeted about this a
couple days ago, which is just, like, I
wish that these agentic tools would put
a pause on opening PRs for a second. Um,
because I think that's the point where
it's really causing a lot of friction.
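The timing giveaway he describes could be checked mechanically. A hedged sketch (the field names here are illustrative, not GitHub's actual API schema):

```python
from datetime import datetime, timedelta

def looks_like_agent_pr(opened_at, body_added_at, reopened_at,
                        opened_as_draft, initial_body):
    # The pattern described above: a draft opened with an empty body,
    # then the body edited and the PR reopened within about a minute.
    fast = (body_added_at - opened_at < timedelta(minutes=1)
            and reopened_at - body_added_at < timedelta(minutes=1))
    return opened_as_draft and initial_body == "" and fast

t0 = datetime(2025, 1, 1, 12, 0, 0)
print(looks_like_agent_pr(t0, t0 + timedelta(seconds=20),
                          t0 + timedelta(seconds=40), True, ""))  # True
print(looks_like_agent_pr(t0, t0 + timedelta(hours=2),
                          t0 + timedelta(hours=3), False, "Fixes #12"))  # False
```

A heuristic like this obviously produces false negatives as the tools change behavior, which is part of why he lands on policy rather than detection.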
>> How did you change the policy? Are you
considering closing down PRs? You
mentioned recently that the thought
crossed your mind.
>> I would say I was crashing out in that
moment.
Uh, but kind of, um, so we shipped this
policy update where PRs written by AI
are no longer allowed unless they're
associated with an accepted feature
request. So you can't just drive by and
be like, "I did this thing that I've
never talked to you about. Here you go."
And we get about two or three of those a
day, and so we just close them. I
literally don't even read the content. I
can see it's AI. Uh, I can see there's
no "fixes" issue number. I just close
it. No idea if the code is good. Don't
care. It's just policy. Don't have time
for that. That's pretty much where we've
landed currently.
And, uh, we're recording this in the
middle of another transition, which I
already have the PR open for. Um, we're
going to switch to an explicit vouching
system for the community. So you're no
longer able to open a PR at all. AI or
not, I don't care anymore. Which, I
think, for the people who criticized:
where it came from doesn't matter. It
doesn't matter anymore. Now all that
matters is that another community member
has vouched for you. Um, and if they
vouched for you, you're added to a list
where, forever, or indefinitely, you
could open a PR. If you behave badly,
then you, the person who invited you,
and the entire tree of people they ever
invited are blocked forever from the
repo.
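The policy he just described maps to a small tree structure. A toy sketch (names invented; the actual Ghostty tooling may differ): banning an offender blocks them, their voucher, and the voucher's entire invite subtree.

```python
class VouchRegistry:
    def __init__(self, founders):
        self.parent = {f: None for f in founders}   # who vouched for whom
        self.children = {f: [] for f in founders}   # invite tree, downward
        self.blocked = set()

    def vouch(self, voucher, newcomer):
        if voucher not in self.parent or voucher in self.blocked:
            raise PermissionError("voucher cannot vouch")
        self.parent[newcomer] = voucher
        self.children[newcomer] = []
        self.children[voucher].append(newcomer)

    def can_open_pr(self, user):
        return user in self.parent and user not in self.blocked

    def ban(self, offender):
        # Block the offender's voucher and everyone that voucher's subtree
        # ever invited (the offender included, since they are in it).
        root = self.parent.get(offender) or offender
        stack = [root]
        while stack:
            user = stack.pop()
            self.blocked.add(user)
            stack.extend(self.children.get(user, []))

reg = VouchRegistry(["mitchell"])
reg.vouch("mitchell", "alice")
reg.vouch("alice", "bob")
reg.ban("bob")                      # bob, alice, and alice's invitees blocked
print(reg.can_open_pr("alice"))     # False
print(reg.can_open_pr("mitchell"))  # True
```

The "denounce" mechanism he mentions next would be a separate negative mark; this sketch only models the vouch tree and the subtree ban.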
>> This reminds me a little bit of, you
know, the social site Lobsters.
>> Lobsters. Yes. That's what it's based
off of. So the idea is that you're
putting your own reputation on the line
by vouching for somebody else. I'm a
reasonable person. If this happens, and
I, or one of our maintainers, or the
community, made a mistake, if you just,
like, hop into Discord or email and seem
like a reasonable, apologetic person,
I'm not going to spend a lot of time.
There's not going to be, like, I don't
know, a mock court-type session. I'm
just going to be like, "Okay, I'll give
you another chance." So,
yeah, we're we're sort of moving that
system. I think one thing that's a
little bit different is um I should say
that this is one inspired by lobsters
but specifically in the AI space is
inspired by this project called PI. Um
They do this... well, they...
>> It's built on Pi; it's a self-improving...
>> ...like a build-your-own-agent toolkit. So, kind of ironically, it's an AI tool, but they care a lot about code quality and anti-slop and things like that. So they have a similar mechanism, a little bit less of the tree and some other differences, but similarly, you can't open a PR unless you're vouched for. And the other difference here that we're going with is that in addition to vouching, where you positively mark someone, you can actually denounce users. So if there's a bad actor, you can actually ban them, so they can't even attempt to contribute again. And yeah, we had one yesterday where someone opened a PR, we closed it because it had no associated issue and it was AI, and then they resubmitted a new branch and reopened it less than 10 minutes later. I was like, "Oh my gosh." So stuff like that... the problem is it's just wasting time.
>> It feels like most of open source will have to change because of AI, right? You probably know more maintainers than I do, but your story is not the only one I hear: projects closing down PRs, and GitHub, I think, is just shipping a feature so projects can automatically close or reject PRs.
>> Yeah, I think open source will have to change in a lot of ways. I forget who wrote this, but one of the logical extremes is: if agents are so good, you don't need open source anymore, because you could just build it yourself.
>> Theoretically, yes.
>> That's the extreme. I don't subscribe to that extreme, but it's one of the extremes. The issue is there used to be this natural back pressure in terms of the effort required to submit a change, and that was enough.
>> And now that has been eliminated by
AI. I like the wording that Pi uses, which is that AI makes it trivial to create plausible-looking but incorrect and low-quality contributions. And that's the fundamental issue. Open source, to a certain extent, has always been a system of reputation, right? You earn some trust and you get more access; that's how it's supposed to work. But that reputation system, or the default-allow on PRs, has been taken advantage of with AI. And so I think this vouching system that we're proposing for my project is very true to what open source is, which is that open source has always been a system of trust. Before, we had default trust; now it's default deny, and you must be given trust by somebody.
>> Do you think we might see a lot more forking happening, though?
>> I hope so. I hope so.
>> Because until now, forking was a lot of effort to keep up with; it never seemed viable to fork a proper project, right?
>> Yeah. And okay, separate from AI and everything, I have always been a huge proponent, or I guess in the past few years a huge public proponent, of there being a lot more forks. Like, a lot more forks. Because I think one of the reasons maintainers have been taken advantage of, to some extent, is that contributors have some sort of entitlement, whether it's toxic entitlement or not, which is: "I've made a valuable change, and it's clean and it works great, so you should accept it." But you really don't have to. You absolutely don't have to. And I've seen this time and time again, where you have a high-quality PR, a perfect PR, but you say no and there's anger in the community.
>> But the thing is, and I've said this for 10 years, since the HashiCorp days: hitting the merge button is the easiest step. Getting to and hitting the merge button is the easiest step. An undergraduate should be able to do that. It's after that, the years of maintaining whatever you just merged within the context of your roadmap, the bugs, the customer needs, all that stuff, that's the hard part. You're signing up to keep this forever. It's very hard to remove features, or to remove anything. So the core privilege you get with open source, like OSI open source, is forking. That's the right you got. You should fork it and maintain your own software.
>> Yeah. One interesting impact of AI: someone tweeted about a rumor that big tech is looking into rearchitecting their monorepos because of agentic AI tooling, just a lot more code being churned out. What's actually happening?
What's the problem with Git?
>> The problem with Git... I mean, I think there are a lot of problems with Git, but the monorepo problem is that Git is relatively bad at very large repositories, because you pretty much have to clone the entire repository. There are some extensions to fix that, but official mainline Git can't really do it, right? And so very large repositories are sort of annoying to maintain. And then, if you have a lot of churn, it's very hard to get changes into whatever your trunk is, your main or master branch. The concept of a rebase merge queue solves that to a certain extent. I think merge queues work for humans at a certain scale, but the queues can get quite deep. And then if you 10x that, conservatively I think it's 10x, and if you buy into the hype cycles, 100x or 1,000x, I think it gets completely untenable in terms of how you ever get any semblance of cohesiveness onto the main branch quickly. And so yeah, I think there's a confluence of problems there: the merge queue problem, the disk space problem, the branching-and-review type problem. Oh, I also tweeted the other day that Git has this model where you branch and you push up your branches, but the branches are only the positives. When you close a PR and don't accept it, you pretty much lose the branch. On GitHub you can re-access closed PRs, but a lot of people don't even get to the PR stage. They experiment, they're like, "Oh, this isn't the right way," and they never push the branch. And that's relatively important information. Not as important as the positives, but I think there should be a lot more branches in Git, a lot more information that we just never throw away. To me, we're at the Gmail moment for version control: you used to really have to curate and delete all this email, and then Gmail came out, gave a gig away for free, and everybody never had to think about it.
>> Their tagline, or something I remember seeing in their marketing, was like: just archive it, never delete it.
>> Right. And that's where I feel we should be at with code: just these huge repos, a lot of context, and we need better tooling to find the relevant context in that Git repo, or version-controlled repo. You asked for real examples. I do advise a company that's currently in stealth but working in this space, and the real examples are driven by the highly agentic companies, the ones going really all-in and drinking the Kool-Aid. They're struggling because the amount of churn these agents cause is so much greater than what humans cause. And it's not an AI review problem or anything. It's really just a release problem: managing the merge queues, humans getting access to the right set of data in the repository, and things like that.
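A toy model makes the merge-queue arithmetic above concrete. All the numbers here are invented for illustration, assuming a serialized CI run gates each merge to trunk:

```python
# Back-of-the-envelope merge-queue model: PRs arrive at some hourly rate,
# each must pass a CI run against trunk before merging, and those runs are
# serialized. Once arrivals exceed CI throughput, queue depth compounds.

def queue_depth_over_day(prs_per_hour, ci_minutes, hours=8):
    merges_per_hour = 60 / ci_minutes  # serial CI capacity
    depth, history = 0.0, []
    for _ in range(hours):
        # Each hour: new arrivals join, capacity drains, depth can't go negative.
        depth = max(0.0, depth + prs_per_hour - merges_per_hour)
        history.append(depth)
    return history

# A human-scale team roughly keeps up; a 10x agentic workload never drains.
humans = queue_depth_over_day(prs_per_hour=5, ci_minutes=10)
agents = queue_depth_over_day(prs_per_hour=50, ci_minutes=10)
```

With 10-minute CI runs, capacity is 6 merges an hour: 5 PRs an hour drains fully, but 50 an hour leaves a backlog growing by 44 every hour, which is the "merge queues get quite deep" failure mode at 10x churn.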
>> So are the problems mainly performance problems with Git, or also the workflow?
>> Yeah, all of it. Performance for sure, but workflow too. I mean, every time you pull, you can't push because there's another change ahead of you, and every time you push, the same thing.
>> Yeah, there's a lot of parallel work happening as well. Do you think Git will be around in a few years?
>> Who knows? But what's interesting is this is the first time in like 12 to 15 years that anyone is even asking that question without laughing.
>> We're not laughing.
>> Right? If five years ago you'd asked, "Will Git be around in 5 years?" it'd be, "Of course it'll be around. That's crazy to think," right? But now people can ask that question. And of course some people will laugh, but there are people who genuinely think Git might not be around in 5 years.
>> Well, I think you do want to save the prompt history, because often reading the prompt matters; if it's a bunch of generated code, the pull request alone is meaningless.
>> Change will happen. Git and GitHub-style forges in their current form do not work with agentic infrastructure, and it's nascent today. So yeah, change will happen; I'm not exactly sure where, and it's not something I'm trying to change myself, but I'm on the receiving end, as an agent user and a maintainer, where I'm like: this isn't working.
>> What other engineering practices, ones that have been relatively stable for 10, 20, or even more years, do you think have to change, or look likely to change? Thinking of things like CI/CD, testing, code review, other ways of working.
>> Yeah, you know, Amp has this saying, it's kind of clickbaity but it's so true: everything is changing. And this is the first time, really, in my 20-year professional career, relatively short compared to some, where it feels like so much is on the table for change at one time. And I'm an optimist, so it's really exciting to me. It's a lot of fun. We've never seen so much editor mobility. Editors used to be one of those things where once someone picks an editor, it's very hard to get them off that editor. They're stuck. The level of editor mobility in the past few years, between VS Code and Cursor and just jumping around, is unreal. Cursor itself is a great example of a company that reached an insane valuation that you could never have gotten pre-AI on an editor product. So editors, forges, CI/CD for sure. And I think testing in general, because to make an agent better, it needs to be able to validate its work. Even the best test suites... I guess the best have full coverage, but that's the extreme. Very good test suites test one of the edge cases, one of the happy cases, a bad case, and if the tests pass, it's probably good, paired with a human who's thought about the problem. But AI is more goal-oriented: "I want this feature to work this way." If it doesn't see a spec somewhere, or a test somewhere, saying that other things should work a different way, it'll just break them on the path to its own goal. And so I've heard this called a lot of things; the one I like the most is "harness engineering."
>> Harness engineering.
>> Yeah. One of my goals for this calendar year has been to spend more time doing that, which is: any time you see AI do a bad thing, try to build tooling it could have called out to that would have prevented that bad thing, or course-corrected it. So it's sort of like moving from working on the product to working on the harness for the product, or for product development. And so yeah, there's a lot of that, where I think testing has to change to be far more expansive, but CI/CD is not set up, just resource- and performance-wise, to do stuff like that. So I'm not sure how it changes, but that's going to change too. Everything is on the table. It's really interesting.
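The "harness engineering" idea above, tooling the agent can call to catch its own mistakes, can be sketched as a single gate script. This is a hypothetical sketch, not any project's actual harness; the Go commands in `CHECKS` are placeholders, and the pattern is just: chain the project's validators and return the first actionable failure.

```python
# Hypothetical harness: one entry point an agent can run after making
# changes, chaining the project's validators and reporting the first
# actionable failure so the agent can course-correct before opening a PR.
import subprocess

CHECKS = [
    ("format", ["gofmt", "-l", "."]),    # lists unformatted files on stdout
    ("vet",    ["go", "vet", "./..."]),  # placeholder commands; swap in
    ("tests",  ["go", "test", "./..."]), # whatever your project uses
]

def run_harness(checks=CHECKS):
    for name, cmd in checks:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        failed = proc.returncode != 0
        if name == "format":
            # gofmt -l exits 0 but prints offending files; treat that as failure.
            failed = failed or bool(proc.stdout.strip())
        if failed:
            # Short, actionable message the agent can feed back into its loop.
            return f"{name} failed:\n{proc.stdout}{proc.stderr}"
    return "all checks passed"
```

The point is the feedback shape: a single command with a clear pass/fail and a concrete first error is something an agent can reliably act on, unlike a wall of mixed CI logs.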
>> Yeah. And a lot of tools to be built. One other thing: observability.
>> Yeah. And on that same topic of volume and scale and observability, there are also the sandboxes. Even being heavily into infrastructure, I didn't see this coming: containers blew up the number of minimal compute units we had floating around everywhere, and I thought that would keep going up predictably, but I didn't think it was going to slope-change up. And it has slope-changed up already, just due to the sandbox environments that agents need. And that's super interesting to me, because it stresses a whole lot of new systems. The things I worked on, all the products I worked on, but also things in the ecosystem like Docker and Kubernetes, are going to be stressed significantly, because they're engineered for some level of scale, but this is a different type of, particularly non-production, workload scale that you have to support. So yeah, they're fun problems.
>> Going back to hiring: you've hired a lot of engineers, and you previously talked about something really interesting, I think in the context of HashiCorp: how some of the best engineers you've hired had really boring backgrounds. Can you talk about that? Who were the best engineers you hired, and how...
>> That's a better way to frame it. Yeah, I stand by this. Most of the best engineers I can remember from my time at HashiCorp, but also from every job I've had, are notoriously private. And not because they want to be private; they just don't care to be public, I guess, would be the better way to put it. I don't want to describe anyone so carefully that I give them away, but they just don't have social media profiles very often. They honestly are 9-to-5 engineers. They go home and they don't code at night; they just spend time with their family. But because they don't do anything else during their working time, they're locked in, and they're really good. And it's not about putting in the hours; it's also just skill-wise, they're super strong. So yeah, I always found, when I was reviewing resumes, when you find the person who doesn't have any GitHub, not even a GitHub account... some people are like, "Oh, you have to have public contributions to stand out." That is a way to stand out. But also, if you have zero public contributions and you've just worked at companies I've never heard of before, that's kind of interesting to me too: okay, you might know something deep. The problem is, and the ironic thing is, I spend a lot of time on social media and these engineers are better than me. But every moment you spend on social media is zero-sum. Every moment you spend on social media is taking away from something else, and the issue is it's not one-for-one, because as every engineer knows, the time it takes to really get your mind into flow, to get going with something, varies, but it takes time. So when you context-switch to social media, if something's compiling and you tab over and spend time there, you've given something up in terms of thinking. I do spend a lot of time, maybe an unhealthy amount, on social media, but I also spend an unhealthy amount of time thinking at night. I don't have insomnia, but it takes me a long time to fall asleep, because I just sit there in the dark, and I love it. Some people do this in the shower, but the shower's not long enough for me. I love to just sit in bed, lights off, my wife sleeping, and think things through. I'm writing code in my head. I'm thinking through products, through website copy; I'm running the CLI in my head of how it's going to feel. And sometimes... last night I went to bed at 9:30, because I'm a dad, so I go to bed early,
>> And you have to wake up, and you don't know when you'll have to wake up.
>> Yeah. And I didn't even feel like I'd been up that long, and I was like, I've got to go to the bathroom, I should really actually go to sleep. And I looked, and it was 12:30, and all I was thinking about, it's so dumb, but all I was thinking about was this vouching system: how vouching might work and might not work. And I've always had this thing where... I like competing, I think competition's fun, but I always feel it's fair game to compete with anyone in the product-building space, because I think I'll spend more time thinking about it than they will. I think people turn it off, and I try not to turn it off. So yeah, I think the point of all that is: the best engineers are the ones that context-switch the least, probably.
>> Having used AI agents, do you think this might change? Because these agents can go off and think, or do work for you. How would you hire in this new world, where using AI is kind of a given? Most devs will prompt, and fewer and fewer will write everything by hand, even though the best devs clearly know how to write code as well.
>> I would definitely require competency with AI tools. You don't need to use them for everything; that's not important to me. But it's an important tool, and you have to understand its edges. It's like any other tool, where sometimes it's useful and sometimes it's not, but if you ignore it completely, you're going to do something suboptimal at some point. The best example to me is proofs of concept. Constantly, in real product organizations, you have an idea and you need to demo it out to figure out if it works. I would much rather someone throw slop at the wall that you're never going to ship, and spend a day, or less than a day, doing that, rather than spend a week doing it organically as a human. You're going to throw it away anyway, and you might throw it away because it's a bad idea, but I'd rather prove it out. So just slop it up. And this is why it's so nuanced, why I get so worked up about sloppy PRs to open source: there is a time and place for them, and that's just not it. So I would hire in that way. And the other thing, I don't know if it's the right thing to do, but I would strive for that goal that I have: for everyone to have an agent running at all times. It doesn't need to be coding, but it should be doing something extra for you. I would strive for that. I do it while driving; that's my biggest one. On the drive here, I had some deep research going. I will always spend 30 minutes on the boundaries: when I wake up, and before I stop working or leave the house, I spend 30 minutes on "what can my agent be doing next? What's a slow thing my agent could do for the next stretch?" And I knew I was going to drive here for an hour. It finished far faster than an hour, but it was just like: oh, I need to do some library research. Okay, find all the libraries that have these properties, that are licensed in this way. I was looking up some HTTP/3 stuff, quick stuff, so: build that ecosystem graph for me. Right before I left, I was working on something to do with this vouching system, and I didn't quite understand the edge cases of what I was doing. I will think about that manually, but why not also start an agent to look at the repo? I use Amp to consult the Oracle, to think deeply about what the edge cases might be, what I'm missing. If I had another two hours to work, I wouldn't need the agent to do that; I would have done it myself. But I don't, so why not have it do it? So it's just part of my goal to always have one going. And I unfortunately don't have one going right now, because they finished it all.
>> Interesting. And so this agent running there... do I understand correctly that it's now so natural that it doesn't get in the way of your own thinking? You do your own thinking and your own work, but every now and then you glance at it, you ping it, or you start it. It's not distracting, right? Because I think that's...
>> Yes. All the agentic tools do this, and I turn off the desktop notifications. I think desktop notifications are, for the most part, a mistake. So yeah, I turn those off. I choose when I interrupt the agent; it doesn't get to interrupt me. And then there's another aspect where I think my engineering has changed: I try to identify the tasks that don't require thinking and the tasks that do, and just delegate the non-thinking work to an agent. Sometimes it just feels productive to do the non-thinking tasks, and you're like, "Yeah, I did a lot today. I got this done." But a lot of the time I just try to delegate that out. There are a lot of people who say you think less with these tools. And I think if you use them wrong, you do think less, because you just launch an agent and, I don't know, go watch YouTube or scroll social media. But if you instead view it as a way to choose what you think about, then I don't think you need to sacrifice that thinking. The problem is, the majority of the population probably won't use it that way.
>> Yeah. But it's still good food for thought, and it's good to hear how you're using it and that it's working for you. When did you start to have this second agent running? What made the switch? Was it the models getting better, or...?
>> Yeah, I don't remember which model it was, but there was a certain point... I tried Claude Code right when it came out. It was just like March or May last year.
>> Yeah, it was March for the beta and May for the public release.
>> Okay. I don't think I used the beta, so it was probably May. I wasn't super impressed, honestly. And then, really quickly, by the summer, at some point during the summer...
Oh, I remember. I saw so many positive remarks about it that I started to get scared that I would be behind on how to use the tool. And so I started forcing myself to use it. I still didn't believe in it, so I would do everything manually, but I was forcing myself to figure out how to prompt the agent to produce the same quality of result. I was working much slower because I was doubling the work, and it was more than double, because the agents are slow, and we're going back and forth, and I already had the work done, and all this stuff. But I was forcing myself to do it. And you find stuff where I couldn't figure it out; it just wasn't there yet. But then I found other stuff where I naturally got to the same point that thousands of other people got to, like: oh, if I do a separate planning step, it does so much better. And everyone got there. And then I figured out: oh, if I have a better test harness for it to execute, it does a lot better. And I think everyone starts with no AGENTS.md or CLAUDE.md or anything, same thing. I realized: oh, if it makes a mistake and I add that to AGENTS.md, it never makes that mistake again. These are just incremental things, and I recognize them when I see people who are new. I've lurked on a couple of live streams where kind of anti-AI people try AI, and it's one of those things where I'm like: they're just swinging the hammer way off, right? It's as if someone tried to adopt Git, used it for an hour, and decided they weren't more productive with it. It takes much longer than an hour to get proficient with Git, but you put in the effort, and then you reap the rewards later. And it's sort of the same thing to me with AI tools.
>> What would your first advice be for someone who's not there yet?
>> My first advice would be: reproduce your work with an agent. And if you really, really don't want an agent to code, reproduce the research part of your work with an agent. There are a lot of people who say, "I don't want it to write code for me," for whatever reason. Fine, but then delegate some of the research part. There are so many places it could be helpful. You don't need to buy into the "it must replace you as a person" kind of propaganda. You could just find the corners of your work and replace those parts.
>> One thing that you give people is advice for potential founders, because you're a successful founder. You've had an exit; you built up this awesome company. You get a bunch of emails from people asking, "Hey, I want to be a founder." What's your advice? You wrote about this, you shared the email, but can you tell us what advice you typically give people, and how it's received?
>> Well, I usually ask for something more specific. Because if someone asks, "What could I do to be successful?", one, I will always disclaim that you're consulting someone with survivorship bias, so you need to take that into account. I'm willing to share my experience as a survivor, but just understand that there's survivorship bias. But usually I ask for something more specific: what are you trying to do? And so we usually get to, "Should I open source my project or not?" or "Should I be remote or not?" or "Should I do enterprise?" and so on. But the most general advice I usually give people is: startups are much longer than you think. You're probably going to work on it for... I say imagine 10 years. A lot of people say 5 years, but I say imagine 10. Is this really something you want to work on for 10 years? You need a certain amount of hubris to say, "I'm going to work on this for 10 years, and I truly believe I'm going to do it better than anyone else." There's nothing behind that, no substance behind it, other than hubris. So you need a certain amount of ego and hubris in your head to make that commitment, but not so much that you'll be blind to change coming. That's usually the first advice I give, because a lot of people have cool ideas, but they're going to burn out relatively quickly. So that's where I start.
>> So currently you're advising some companies. What are you seeing with them? What are startups doing these days? What are they doing differently than before? How's that landscape?
>> Again, it's really contextual: if you're an AI startup, it's very, very different.
>> How are AI startups working differently?
>> There's more pressure to go faster than I've ever seen at any startup. I think the industry is moving so fast that... I don't advise any AI startups, but I've talked to some of them, and even as an adviser, I feel like it's too much pressure, because they are being pushed to prove themselves quickly, whether it's through traction or revenue or something. There's this mentality within that ecosystem that AI should allow you to go crazy fast, and in addition to that, there are a lot of companies moving crazy fast. So the change is happening; I think that's the one thing. Outside of that, like I said, there's just a ton of opportunity in every space. Otherwise, it's a lot of the same stuff: remote versus non-remote, open source versus not open source.
>> Do you see the role of software engineers changing now, especially at the AI-native companies, where engineers like yourself are actually way more productive? They can produce a lot more code, a lot more output. Are they being pushed into wearing more hats, talking to the business, being a bit more like a mini-founder, if you will?
>> I hesitate to say more productive. I view it as an expectation that they can do more. I don't think that's necessarily more productive, but it's more like: you should be able to, for example, build a full demo, design and everything, yourself. You don't need a team to do that anymore, right? At least from a demo perspective. There's no reason not to, because again, you can ship slop for that. That's fine. And, this is still the same, but you should be able to research effectively and, in a sense, handle more vague tasks. I'm seeing that a lot more: the capacity to experiment is so much higher, I would say. But when it turns into productionizing something, it feels similar to what it's always been. I think there are a lot of companies eating the, you know, dog food of the AI companies, of shipping whatever, and I think that's a little scary.
>> Yeah. They look at Anthropic and they're like, "Oh, they built Claude Cowork in 10 days and it'll be a billion-dollar company," and they're freaking out about why they're not doing that.
>> I think a big change is from a pre-seed perspective, where you used to say, "I need to raise a seed in order to build a prototype." Now it's, "Show me the prototype," because you should be able to build that really quickly for most things. There's still hard tech out there where you can't do that.
>> So, you do a bunch of coding, and a bunch of thinking about coding, even as you're trying to fall asleep. What refills your bucket outside of coding, outside of tech?
>> Obviously the stereotypical things, like taking breaks and being with my family, but I think the biggest thing is that I am introverted, so quiet solo time refills the most energy for me. I live pretty close to the beach, and if I'm in a bad mentality, things aren't working, I'm feeling unproductive, or something's going on, just closing my laptop and taking a walk outside, stuff like that, helps a lot. I have a lot of hobbies and stuff, but as a general recharge, it's that more than anything. I know for a lot of people it's going out with friends or something like that; I like that too, but it's not the full recharge for me.
>> And what's a book that you would recommend, and why?
>> So I pretty much only read fiction, outside of news.
>> Great.
>> Great. Okay. The most recent book of fiction I read is an older book, and it's an easy read, so I hope people aren't like, "Oh, he's an idiot for reading this." But it was, what is it called? The Something Life of Addie LaRue. It's kind of a romantic type of fiction novel. I think it's like 10 years old; it's older now. But it's about a woman who sells her soul to live forever, but the cost is that no one remembers her once they walk out of the room. It goes through her whole life of losing all human connection, while she gets to live forever, and what that is like. And I like reading fiction, though.
>> I like reading fiction at night. I don't
know if it's escapism, or just, you
know, you get something a little bit
different. It's so different from the
coding or anything, maybe it just helps
me turn off the thing. I personally
probably read way more fiction than I do
professional non-fiction, honestly.
>> Yeah. Yeah. I'm I'm the same way. It's
my version of TV, too. TV to me is more
a social activity. Like, if if my wife
wants to watch something together, like
we'll watch a show. But if I'm alone,
I'm not going to watch a show. I'm gonna
read probably.
>> Awesome. Well, thanks so much for going
through all of these details. It was
just great to hear how you're working,
the history of HashiCorp. This was all
just really interesting and motivating.
>> Yeah, thank you. Thank you.
>> I hope you enjoyed this long and
interesting conversation with Mitchell.
One thing that really stuck with me from
this conversation is Mitchell's own rule
for himself: always have an agent that
does something. Not necessarily coding,
just doing something. For example, while
he was driving to this podcast
recording, he had Deep Research running.
Before he leaves the house, he asks
himself, "What's a slow task that my
agent could do while I'm gone?" An
important part of all of this: he turns
off all notifications. The agent does
not get to interrupt him. He interrupts
the agent when he's ready. Mitchell is
in charge, and he has a buddy who does
the work that he has delegated while he
focuses on the problem that he is
solving. This is a nice challenge for
anyone listening. Next time you step
away from your desk, before you close
the laptop, ask yourself: what slow task
could an agent be doing while you're
gone? If you enjoyed this episode, share
it with a colleague who's thinking about
where software engineering could be
heading. And if you've not subscribed
yet, now is a good time. We have more
conversations like this one coming.
Thanks, and see you in the next one.
Mitchell Hashimoto, co-founder of HashiCorp and creator of Ghostty, shares his journey from self-taught coding to building modern cloud infrastructure. He recounts HashiCorp's origins in a failed university research project, its pivot after initial commercialization failures, and candidly discusses his varied partnership experiences with AWS, Azure, and Google Cloud. A major focus is the transformative impact of AI on open source, which is forcing changes to contribution trust systems and new policies like Ghostty's vouching model. Mitchell also details his personal AI-integrated workflow, always keeping an agent running, and offers advice for aspiring founders along with reflections on the evolving landscape of software engineering and hiring.