
Sam Altman said what???

Transcript

0:00

OpenAI just got done hosting some sort

0:02

of town hall in which they brought in a

0:03

bunch of the developers and Sam

0:05

Jippidity Altman fielded a bunch of

0:07

questions from a bunch of developers and

0:09

tried to kind of give uh you know

0:11

OpenAI's or his perspective on what's going

0:13

to happen and there was honestly some

0:15

pretty good questions out there and some

0:17

kind of I I guess interesting uh

0:19

responses from Sam and not only that but

0:22

at the end of all of this or towards the

0:24

end there was kind of like a really you

0:27

know off-putting question but an even

0:29

more off-putting answer like I couldn't

0:31

even believe he would say it just makes

0:33

it makes uh makes things feel more

0:36

uncertain you know because here's the

0:38

deal is that typically if I'm going to

0:40

listen to the doom and gloom of AI you

0:42

got to go to Dario okay CEO of

0:44

Anthropic okay every single time you see

0:46

him talking about the doom and gloom

0:47

he's making the painful face and going

0:49

you know

0:52

I mean it's going to they're

0:54

going to create an oligarchy and I'm

0:56

just not sure how to prevent it I guess

0:58

everyone who creates an AI is going to

1:00

be a trillionaire and then we're going

1:01

to have the permanent underclass. I

1:03

mean, it's just one of the costs of

1:05

progress, you know, like every time

1:07

you're like, "Dude, yo, Dario, why why

1:09

you got to bring me down like this,

1:10

okay? Why are you talking about a

1:11

country of geniuses? Okay, it hurts my

1:13

feels." Anywhoo, the town hall lasted

1:15

a little bit longer than an hour and so

1:17

I kind of picked out some of my favorite

1:19

parts and I'll go over them and we'll

1:20

yap about it for a second. Then we'll do

1:22

the ending one that's kind of I don't

1:23

know just I still I guess feel weird

1:25

about it. I kind of feel weird that

1:27

nobody feels weird about it. I don't I

1:28

don't know. Maybe I you know I get I get

1:31

so easily influenced. So this guy asks

1:33

like, "Hey, models are really expensive,

1:35

dude. You going you going to hook a bro

1:38

up? You going to make him cheaper?"

1:39

>> I think we should be able to deliver GPT

1:42

5.2-xhigh level intelligence

1:47

by the end of 2027 for

1:50

Do you want to give a better guess? I

1:52

can give one otherwise.

1:54

Anyone want to give a guess? I would say

1:57

at least 100x less.

1:58

>> Uh but that's kind of a crazy statement,

2:00

right? So 5.2-xhigh within the next uh

2:04

what is that 22 months is going to be

2:06

100x cheaper, which by the way is

2:09

actually pretty consistent from Samuel

2:11

Jippy over here that he actually does

2:13

say that AI is going to reduce the cost

2:14

10x every single year. So in 22 months,

2:17

23 months, we should see 5.2 being

2:21

100x cheaper, which actually would make

2:24

it usable by a lot of people. Now, I

2:26

have no idea how they're going to make

2:27

it 100x cheaper. It's just going to be

2:30

100x cheaper. I have a sneaking

2:32

suspicion that there's going to be one

2:33

man who has to figure this out, too.
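As a rough sanity check on the numbers quoted here (assuming the "10x cheaper every year" trend Altman is paraphrased as citing compounds smoothly, which is my assumption, and with a helper function name of my own invention):

```python
# Hypothetical sanity check: if inference cost falls 10x per year and that
# trend compounds smoothly (an assumption, not something stated in the clip),
# what total reduction factor do you get after a given number of months?
def cost_reduction_factor(months: float, yearly_factor: float = 10.0) -> float:
    """Total cost-reduction factor after `months` on a `yearly_factor`-per-year trend."""
    return yearly_factor ** (months / 12.0)

print(round(cost_reduction_factor(22)))  # -> 68, a bit shy of the 100x claim
print(round(cost_reduction_factor(24)))  # -> 100, a full two years lands on 100x
```

So "100x cheaper in 22 months" is slightly ahead of a smooth 10x-per-year curve, but consistent with it over a full two years.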

2:36

Just carrying the weight of inference on

2:38

his shoulders. Before we get to, you

2:40

know, the the the crazy part of this

2:42

whole thing, there was one more. Hey,

2:44

guess what everybody? We had a celebrity

2:46

in the midst. Look at this one right

2:47

here.

2:48

>> I want to ask about something a little

2:49

bit different, though. more on the

2:50

technical side. I

2:51

>> uh by the way, this is Theo, the dev

2:53

YouTuber, YC founder.

2:55

>> One of the fears I have as the models

2:57

and the tools we used to build with get

2:58

better is that we might get stuck with

3:00

the way we have things working. Now,

3:02

>> Theo goes on and explains the problem a

3:04

little bit more and Sam of course gives

3:05

effectively an answer, non-answer.

3:07

Honestly, this is a very good question

3:10

and this is kind of like the big fear,

3:11

especially with ads kind of creeping in.

3:14

One can imagine that maybe the LLM just

3:16

always does give kind of a certain

3:18

answer once we get past the kind of

3:20

hard-coded big displayed ads. But

3:22

nonetheless, this happens all the time,

3:24

right? Like if you ask to build an

3:26

application for the web, the chance of

3:29

you getting React is just really, really

3:32

high. Like certain patterns just exist

3:34

and statistically speaking, that's the

3:36

pattern you should use. And if these

3:38

magical statistical machines do

3:40

anything, they're going to give you the

3:42

answer that you need for that question,

3:43

which just might be the same

3:45

technologies over and over again, which

3:46

makes it actually kind of interesting

3:47

because how does one make a better

3:49

system if one doesn't even know how to

3:52

make a system to begin with? And even

3:54

more so, how does one of these models

3:57

know which of the systems to pick when

4:00

80% of them keep using the same item?

4:02

Like, wouldn't that be the one you'd

4:04

want to choose, the one with the most

4:05

results? Great question. Very tough one.

4:07

There's I feel like there might be a

4:08

little bit of manual manipulation coming

4:10

in. Again, ads. Little bit worried about

4:12

that one. All right, let's actually get

4:14

to the big one. This is the question

4:16

that I just feel like I I I guess I

4:18

didn't see anyone talking about. It's a

4:20

really interesting question, but more so

4:23

it's the response. I'm going I'm going

4:24

to let the full question be played and

4:26

then part of his response be said and

4:28

we'll stop it at kind of like the "what?"

4:30

part. >> The question is, where does security

4:32

fall in this 2026 roadmap and um broadly

4:36

how do you think about some of these

4:37

issues

4:38

>> Security broadly, or biosecurity

4:39

specifically

4:40

>> um either, preferably biosecurity

4:42

>> there are many ways AI can go wrong in

4:44

2026 certainly one of them that we are

4:48

quite nervous about is bio uh the the

4:52

models are quite good at bio and right

4:55

now most of our and by like our not just

4:57

OpenAI's, the world's strategy is to try

5:00

to restrict who gets access to them and

5:02

you know put a bunch of classifiers to

5:04

not help people make novel pathogens. AI

5:07

is going to be a real problem uh for

5:10

bioterrorism. Uh AI is going to be a

5:12

real problem for cyber security. AI is

5:14

also a solution to those things. It's a

5:16

solution to a lot of other problems as

5:17

well. I think we need like a societywide

5:20

effort to provide the infrastructure for

5:21

this resilience not labs that we trust

5:24

to sort of always block what they're

5:26

supposed to block and you know there

5:27

will be many good models in the world.

5:29

We've been talking to a lot of bio

5:31

researchers companies about what it

5:34

takes to be able to deal with novel

5:36

pathogens. I think there are a lot of

5:37

people interested in the problem and a

5:39

lot of people reporting that AI actually

5:41

seems helpful at this but it won't be a

5:44

technological it won't be an entirely

5:46

technological solution. You will need

5:47

the world to think about these things uh

5:50

differently than we have been. So I am

5:53

very nervous about where things are but

5:55

I don't see a path other than the sort

5:57

of resilience-based approach and it does

5:59

seem like AI can really help us do that

6:01

fast. If something goes really wrong

6:03

like visibly really wrong for AI uh this

6:05

year I think bio would be a reasonable

6:07

bet for what that could be and then as

6:10

we get into next year and the following

6:11

year you can imagine lots of other

6:12

things going really wrong too.

6:14

>> What, like, are we going to get a COVID 2.0?

6:18

Is that is that what he's dropping right

6:19

here? That we're about to have some

6:22

horrifying moment in time in 2026, 2027?

6:26

Like, this doesn't sound good. Also, by

6:28

the way, I hate this answer. AI is going

6:30

to be a real problem uh for

6:32

bioterrorism. Uh AI is going to be a

6:34

real problem for cyber security. AI is

6:36

also a solution to those things. It's a

6:37

solution to a lot of other problems as

6:39

well. Dude, there's something about the

6:41

fact that you can create something that

6:44

could potentially have implications on

6:47

millions of people and then also be

6:49

like, "Hey, you know what? You know what

6:50

though? I know it's the problem, but

6:53

it's also the solution." It's like,

6:54

dude, isn't that kind of the thing they

6:56

always warned me about in history class?

6:58

Isn't that what the bad guy always does?

7:00

Creates the problem and then sells you

7:02

the solution. I am, am I wrong here?

7:06

Are we see are we seeing ourselves the

7:08

active creation of the

7:11

classical historical bad guy? Anyway, so

7:13

that is like a reality that he's

7:14

dropping. And here's the thing is I'm

7:16

not even sure how AI solves that. If

7:18

somebody gets a hold of these models, if

7:20

things get cheaper, if technology gets

7:21

much much better, one could imagine that

7:24

producing large scale models that we

7:25

have today in a couple years will be

7:28

significantly cheaper due to

7:29

improvements in technology or whatever

7:31

nonsense actually ends up happening. And

7:32

then boom, all of a sudden they can just

7:35

have their own model that does things

7:36

that are terrible and then what? How do

7:39

you prevent that? How does AI prevent

7:40

that? It doesn't. Somebody just went

7:43

off, brain drained your super great

7:45

model, tossed it into a smaller one, and

7:48

bada bing, bada boom. Like, what? This isn't

7:50

good. I don't know. I just feel like I

7:52

had to to talk about this one. I I know

7:54

this one's like the least happy of all

7:56

my videos. I just don't even know what

7:57

to do with it. I just feel like how come

7:59

no one's talking about this point? It

8:01

kind of feels uh rather interesting that

8:05

uh Mr. Jeypy over here thinks that 2026

8:08

or 2027 if something does go really

8:10

wrong, it's going to be bioweapons.

8:12

Dang, that's uh that's not a W. Not you

8:15

know what, today not a W. Anyways, the

8:18

name is I hope you enjoyed this nice

8:20

town hall.

8:23

You know,

8:25

you guys, you know, I always, you know,

8:27

I always appreciate you. You know what

8:28

I'm talking about. I hope you guys, you

8:30

know, I hope you feel encouraged. I hope

8:31

you're out there, uh, learning, actually

8:33

taking the time to get better at your

8:35

craft. Uh, you know, maybe not letting

8:39

Claudebot take over all of your private

8:41

messages and then accidentally exposing

8:43

it for the world to just come in for

8:45

free. You know, I hope you guys are, you

8:47

know, not doing that. Having a good day,

8:49

you know, again, don't worry that bad

8:53

Sam Altman's not going to get you. Don't

8:54

you worry.

9:05

Hey, you're probably wondering why am I

9:08

in San Francisco? Well, I'm here for a

9:10

big event and I'm going to stream the

9:12

whole thing. It's going to be live on my

9:13

channel for the next 5 days. So, if

9:14

you're watching this video, it's

9:15

probably live right now.

Interactive Summary

This video covers a recent OpenAI town hall where Sam Altman addressed developers regarding the future of AI. Key discussions include the massive cost reduction expected for high-level intelligence models by 2027, the risks of technical path dependency in software development, and a concerning outlook on biosecurity. Altman predicts that while AI will pose significant threats in bioterrorism and cybersecurity starting around 2026, it will also be the primary tool for creating societal resilience against these same threats.
