Sam Altman said what???
OpenAI just got done hosting some sort of town hall where they brought in a bunch of developers, and Sam "Jippidity" Altman fielded a bunch of questions and tried to give OpenAI's, or at least his own, perspective on what's going to happen. There were honestly some pretty good questions out there, and some, I guess, interesting responses from Sam. And not only that, but towards the end there was kind of a really off-putting question, and an even more off-putting answer. Like, I couldn't even believe he would say it. It just makes things feel more uncertain, you know? Because here's the deal: typically, if I'm going to listen to the doom and gloom of AI, you've got to go to Dario, okay? CEO of Anthropic. Okay? Every single time you see him talking about the doom and gloom, he's making the painful face and going, you know,
"I mean, they're going to create an oligarchy and I'm just not sure how to prevent it. I guess everyone creating an AI is going to be a trillionaire, and then we're going to have the permanent underclass. I mean, it's just one of the costs of progress, you know?" And every time you're like, "Dude, yo, Dario, why you got to bring me down like this, okay? Why are you talking about a country of geniuses? Okay, it hurts my feels." Anywho, the town hall lasted a little bit longer than an hour, so I picked out some of my favorite parts. I'll go over them and we'll yap about them for a second. Then we'll do the ending one, the one I still feel weird about. I kind of feel weird that nobody else feels weird about it. I don't know. Maybe I just get easily influenced. So this guy asks, like, "Hey, models are really expensive, dude. You going to hook a bro up? You going to make them cheaper?"
>> I think we should be able to deliver GPT-5.2-xhigh-level intelligence by the end of 2027 for... Do you want to give a better guess? I can give one otherwise. Anyone want to give a guess? I would say at least 100x less.
>> Uh, but that's kind of a crazy statement, right? So 5.2-xhigh, within the next, what is that, 22 months, is going to be 100x cheaper. Which, by the way, is actually pretty consistent from Samuel Jippy over here: he does say that AI is going to drop in cost 10x every single year. So in 22, 23 months, we should see 5.2 being 100x cheaper, which actually would make it usable by a lot of people. Now, I have no idea how they're going to make it 100x cheaper. It's just going to be 100x cheaper. I have a sneaking suspicion that there's going to be one man who has to figure this out, too. Just carrying the weight of inference on his shoulders.
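Quick sanity check on that math: if costs really do fall ~10x per year, then after t years the price of the same intelligence is divided by 10^t, so 22 months comes out to roughly 70x cheaper and a full 24 months to exactly 100x. Here's a back-of-the-envelope sketch in Python; the starting price per million tokens is a made-up placeholder, not OpenAI's actual pricing:

```python
# Toy projection of token pricing under a "10x cheaper per year" trend.
# The starting price is a hypothetical placeholder, not a real quote.
def projected_price(start_price: float, months: int, decay_per_year: float = 10.0) -> float:
    """Price after `months`, assuming cost falls `decay_per_year`x every 12 months."""
    return start_price / (decay_per_year ** (months / 12))

start = 10.00  # hypothetical $ per 1M tokens today
for months in (12, 22, 24):
    price = projected_price(start, months)
    print(f"after {months:2d} months: ${price:.4f} per 1M tokens ({start / price:.0f}x cheaper)")
```

So "100x cheaper by the end of 2027" is just the 10x-per-year claim compounded twice; nothing in the statement tells you how they actually get there.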
Before we get to, you know, the crazy part of this whole thing, there was one more. Hey, guess what, everybody? We had a celebrity in the midst. Look at this one right here.
>> I want to ask about something a little bit different, though, more on the technical side. I...
>> Uh, by the way, this is Theo, the dev YouTuber and YC founder.
>> One of the fears I have, as the models and the tools we use to build with get better, is that we might get stuck with the way we have things working now.
>> Theo goes on and explains the problem a little bit more, and Sam, of course, gives what is effectively a non-answer.
Honestly, this is a very good question, and this is kind of the big fear, especially with ads creeping in. One can imagine that maybe the LLM just always gives a certain answer, even once we get past the hard-coded, prominently displayed ads. But nonetheless, this happens all the time, right? Like, if you ask it to build an application for the web, the chance of you getting React is just really, really high. Certain patterns just exist, and statistically speaking, that's the pattern you should use. And if these magical statistical machines do anything, they're going to give you the answer that you need for that question, which just might be the same technologies over and over again. Which makes it actually kind of interesting, because how does one make a better system if one doesn't even know how to make a system to begin with? And even more so, how does one of these models know which of the systems to pick when 80% of them keep using the same one? Like, wouldn't that be the one you'd want to choose, the one with the most results? Great question. Very tough one. I feel like there might be a little bit of manual manipulation coming in. Again: ads. Little bit worried about that one.
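That lock-in fear is basically a rich-get-richer loop: the model recommends whatever dominates its training data, that recommendation produces more code in the dominant framework, and that code becomes the next round of training data. Here's a toy simulation of the idea in Python; the framework names, starting shares, and greediness rate are all made up for illustration:

```python
import random

# Toy lock-in loop: each generation, a "model" scaffolds 1,000 new projects.
# Most of the time it recommends the current majority framework (the greedy,
# "statistically safest" answer); occasionally it samples proportionally to
# training-data share. Generated code then re-enters the next training corpus.
counts = {"React": 600, "Vue": 250, "Svelte": 150}  # hypothetical starting corpus
GREEDY_RATE = 0.9  # how often the model just picks the dominant answer

random.seed(0)
for generation in range(10):
    leader = max(counts, key=counts.get)
    for _ in range(1000):
        if random.random() < GREEDY_RATE:
            pick = leader
        else:
            pick = random.choices(list(counts), weights=list(counts.values()))[0]
        counts[pick] += 1  # the generated project feeds the next training set

total = sum(counts.values())
for name, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{name:7s} {n / total:.1%}")  # the 60% leader ends up near 90%
```

The exact numbers don't matter; the point is that "give the most likely answer" turns a 60% majority into near-total dominance within a few training cycles, no ads or manual manipulation required.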
All right, let's actually get to the big one. This is the question that I just feel like nobody was talking about. It's a really interesting question, but more so, it's the response. I'm going to let the full question be played, and then part of his response, and we'll stop it at kind of like the "wait, what?" part.
>> The question is, where does security fall in this 2026 roadmap, and, um, broadly, how do you think about some of these issues?
>> Security broadly, or biosecurity specifically?
>> Um, either. Preferably biosecurity.
>> There are many ways AI can go wrong in 2026. Certainly one of them that we are quite nervous about is bio. Uh, the models are quite good at bio, and right now most of our, and by "our" I mean not just OpenAI's but the world's, strategy is to try to restrict who gets access to them and, you know, put up a bunch of classifiers to not help people make novel pathogens. AI is going to be a real problem, uh, for bioterrorism. Uh, AI is going to be a real problem for cybersecurity. AI is also a solution to those things. It's a solution to a lot of other problems as well. I think we need, like, a society-wide effort to provide the infrastructure for this resilience, not labs that we trust to sort of always block what they're supposed to block, and, you know, there will be many good models in the world. We've been talking to a lot of bio researchers and companies about what it takes to be able to deal with novel pathogens. I think there are a lot of people interested in the problem, and a lot of people reporting that AI actually seems helpful at this, but it won't be a technological, it won't be an entirely technological solution. You will need the world to think about these things, uh, differently than we have been. So I am very nervous about where things are, but I don't see a path other than the sort of resilience-based approach, and it does seem like AI can really help us do that fast. If something goes really wrong, like visibly really wrong, for AI, uh, this year, I think bio would be a reasonable bet for what that could be. And then as we get into next year and the following year, you can imagine lots of other things going really wrong, too.
>> What, like, are we going to get a COVID 2.0? Is that what he's dropping right here? That we're about to have some horrifying moment in time in 2026, 2027? Like, this doesn't sound good. Also, by the way, I hate this answer: "AI is going to be a real problem for bioterrorism. AI is going to be a real problem for cybersecurity. AI is also a solution to those things. It's a solution to a lot of other problems as well." Dude, there's something about the fact that you can create something that could potentially have implications for millions of people and then also be like, "Hey, you know what, though? I know it's the problem, but it's also the solution." It's like, dude, isn't that kind of the thing they always warned me about in history class? Isn't that what the bad guy always does? Creates the problem, then sells you the solution. Am I wrong here? Are we seeing the active creation of the classic historical bad guy? Anyway, so that is quite a reality he's dropping. And here's the thing: I'm not even sure how AI solves that. If somebody gets a hold of these models, if things get cheaper, if technology gets much, much better, one could imagine that producing the large-scale models we have today will, in a couple of years, be significantly cheaper due to improvements in technology or whatever nonsense actually ends up happening. And then, boom, all of a sudden they can just have their own model that does terrible things, and then what? How do you prevent that? How does AI prevent that? It doesn't. Somebody just went off, brain-drained your super great model, tossed it into a smaller one, and badda bing, badda boom. Like, what? This isn't good.
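For the curious, the "brain drain" move he's describing is just model distillation: you query the big model, collect its outputs, and train a much smaller model to imitate them. A minimal sketch of the idea, assuming PyTorch; the model sizes, temperature, and random data are placeholders, not any real lab's setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal knowledge-distillation sketch: a small "student" learns to match a
# big "teacher's" output distribution. All sizes and data are placeholders.
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
teacher.eval()  # frozen; in the scenario above, this is someone else's model

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softening temperature for the distillation loss

for step in range(1000):
    x = torch.randn(32, 128)         # stand-in for real (or scraped) inputs
    with torch.no_grad():
        teacher_logits = teacher(x)  # "query the big model"
    student_logits = student(x)
    # KL divergence between softened distributions (classic Hinton-style loss)
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The unsettling part is how little the student side needs: no access to the teacher's weights or training data, just enough queries against its outputs.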
I don't know. I just feel like I had to talk about this one. I know this one's, like, the least happy of all my videos. I just don't even know what to do with it. I just feel like, how come no one's talking about this point? It kind of feels rather interesting that Mr. Jeypy over here thinks that if something does go really wrong in 2026 or 2027, it's going to be bioweapons. Dang, that's, uh, that's not a W. You know what, today's not a W. Anyways, the name is... I hope you enjoyed this nice town hall.
You know, you guys, I always appreciate you. You know what I'm talking about. I hope you feel encouraged. I hope you're out there, uh, learning, actually taking the time to get better at your craft. And, you know, maybe not letting Claudebot take over all of your private messages and then accidentally exposing them for the world to just come in for free. I hope you guys are not doing that. Have a good day. And again, don't worry, that bad Sam Altman's not going to get you. Don't you worry.
Hey, you're probably wondering: why am I in San Francisco? Well, I'm here for a big event, and I'm going to stream the whole thing. It's going to be live on my channel for the next 5 days. So if you're watching this video, it's probably live right now.