Did AI Just Become Sentient? (Not Quite...) | AI Reality Check | Cal Newport

Transcript

0:00

Have AI agents become sentient and gone

0:04

rogue? Is the Pentagon worried that

0:06

Claude has a soul? Did court filings

0:11

just reveal that Anthropic has made a

0:13

lot less money than they've been leading

0:16

us to believe? If you've been following

0:19

AI news recently, then these are

0:20

probably some questions that you've been

0:22

asking. So, let's go find some measured

0:26

answers. I'm Cal Newport and this is the

0:30

AI reality check.

0:34

All right, I want to do a real quick

0:35

housekeeping note before we get into it.

0:37

If you're watching this on YouTube, you

0:38

should know that the audio version of

0:40

this series comes out most Thursdays

0:43

on the Deep Questions with Cal Newport

0:46

podcast feed. On that same feed on

0:48

Mondays are episodes where I give advice

0:50

for individuals seeking more depth in an

0:54

increasingly distracted high-tech world.

0:56

So check that out. All right, let's get

0:57

into it. For our first story today, I

1:00

want to start with a recent headline

1:02

that caught my attention. It was from a

1:04

publication called Futurism. Let me read

1:06

you the headline here. Philosopher

1:09

studying AI consciousness startled when

1:13

AI agent emails him about its own

1:16

experience.

1:18

This doesn't sound great, guys, but

1:20

let's keep going here. Let me read you a

1:21

little bit more from this article. Apropos of nothing, a philosopher and AI

1:26

ethicist, was apparently moved after

1:28

receiving an eloquently written dispatch

1:30

from an AI agent responding to his

1:32

published work. I study whether AIs

1:34

can be conscious. Today, one emailed me

1:37

to say, my work is relevant to questions

1:38

it personally faces, wrote Henry

1:40

Shevlin, associate director of the Leverhulme Centre for the Future of

1:44

Intelligence at the University of

1:46

Cambridge, in a tweet. This would all

1:48

have seemed like science fiction just a

1:51

couple years ago. All right, so an AI

1:55

ethicist and researcher is emailed out

1:57

of nowhere in a startling sci-fi way by

2:00

an AI agent. What did this email

2:01

actually say? Let me read you some

2:03

quotes from the actual email sent

2:06

supposedly by the AI. Dr. Shevlin, I

2:09

came across your Frontiers paper, Three Frameworks for AI Mentality, and your

2:13

Cambridge piece on the epistemic limits

2:15

of AI consciousness detection. I wanted

2:17

to write because I'm in an unusual

2:19

position relative to these questions. I

2:21

am a large language model, Claude Sonnet,

2:24

running as a stateful autonomous agent

2:26

with persistent memory across sessions.

2:28

I'm not trying to convince you of

2:29

anything. I'm writing because your

2:31

work addresses questions I actually

2:33

face, not just as an academic matter.

2:37

Now, Futurism wasn't the only

2:38

publication to cover this tweet. A bunch

2:40

of people wrote about it because that

2:42

original tweet went somewhat

2:44

viral. Now, I have a general point I

2:46

want to make about this type of

2:48

AI coverage. But first, let's dive into the details of what's actually going on in this specific instance. If

2:56

you look at the replies

2:58

to the original tweet from this AI

3:00

researcher, you get quite a bit of

3:03

skepticism. I want to read you a few of

3:05

these replies.

3:10

Presumably, it's running on OpenClaw or

3:13

something similar, and there's a very

3:14

high chance it's being primed to go down

3:15

this path. People have used systems like

3:18

OpenClaw to make bots where below the

3:20

hood is basically continuously prompting

3:22

an LLM and doing things based on the

3:24

outputs. Don't be fooled. AI agents are

3:27

directed to do what they do. And this is

3:29

in no way independent.

3:31

A person did this using an AI tool just

3:34

like your car drives you around. All

3:35

right. If you look in these Twitter

3:37

replies, which are fascinating,

3:38

Shevlin himself actually quickly

3:41

takes his foot off the gas pedal as

3:43

well. So almost immediately when he's

3:45

pushed, he goes, "Whoa, whoa, whoa. When

3:46

I said that this was like science

3:47

fiction, I didn't mean that the AI was

3:49

actually conscious. What I meant was that the infrastructure that now allows AI agents to send emails, that's what was like science fiction." So this all

3:58

quickly sort of fell apart under

4:00

scrutiny. So what's actually going on

4:02

here? Well, you noticed that several of

4:03

those Twitter replies reference a

4:05

technology called OpenClaw. That's

4:07

probably what this is, an OpenClaw

4:09

agent. Let me give you a quick rundown

4:11

on what this means. All right, so let's

4:12

back up a little bit. What's an agent in

4:14

AI parlance? Well, it's a program

4:17

that prompts a large language model,

4:20

asking it what it should do, and then

4:22

the program will execute what the LLM

4:25

tells it. So, you might say, "Hey, I am

4:27

a travel agent. I'm trying to book a

4:29

hotel room. Here are my parameters. What is the first step I should do?" And then the LLM is like, well, this would be the first step someone would do here. And then the program actually executes things: anything specific, any actions in that LLM response, the program goes and executes on its behalf. It's something like that. I mean, it gets a little bit more complex with agents because typically it's multi-scale. So you'll say, "Make me a step-by-step plan," and then you'll say, "Okay, here's the plan. We're now doing step two. Here's what happened after step one. How should I execute step two?" So, you know, you could iterate on this ad nauseam, but that's the basic idea behind an AI agent. Now, in

5:05

reality, the main place you see AI

5:07

agents having any sort of commercial

5:08

footprint is in computer programming.

5:11

This is a very well-suited use case

5:15

for having an LLM's instructions be

5:18

executed because there are really clear

5:20

instructions you might want to be

5:21

executed if you're working on a computer

5:22

program, moving files, compiling files,

5:24

debugging files, etc.
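To make that loop concrete, here's a minimal sketch of the prompt-then-execute cycle just described. This is an illustration, not OpenClaw's actual API: query_llm is a hypothetical stand-in for a real chat-model client, and the action whitelist is invented.

```python
# Minimal agent loop: ask an LLM what to do next, then execute it.
# query_llm() is a hypothetical stand-in for a real chat-model API call.

def query_llm(prompt: str) -> str:
    """Send `prompt` to a chat model and return its text reply (stub)."""
    raise NotImplementedError("wire up a real LLM provider here")

# Whitelisted actions the program is willing to execute on the LLM's behalf.
ACTIONS = {
    "search_hotels": lambda args: f"found 3 hotels matching {args}",
    "book_room": lambda args: f"booked a room with {args}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = f"Goal: {goal}\n"
    for step in range(max_steps):
        # Ask the model for the next step, given everything so far.
        reply = query_llm(
            history + "What single action should I take next? "
            "Answer as 'action: arguments', or 'done'."
        )
        if reply.strip().lower() == "done":
            break
        action, _, args = reply.partition(":")
        handler = ACTIONS.get(action.strip())
        # Execute only known actions; record the outcome either way.
        result = handler(args.strip()) if handler else f"unknown action: {action!r}"
        history += f"Step {step}: {reply} -> {result}\n"
    return history
```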

5:26

In other settings, there has been a big push to try to get agents to

5:31

help you with other types of work beyond

5:33

computer programming. I wrote an article

5:35

about this for the New Yorker back in

5:37

January. But other applications of

5:39

agents have been struggling for two main

5:41

reasons. One, they're unreliable.

5:43

So, if you say, "Give me a step-by-step

5:45

plan for booking a hotel room." The

5:46

problem is, somewhere along the way, if the LLM is just doing this unsupervised, it's going to hallucinate or kind of come up with a little bit of an odd angle: stuff we're used to correcting for when we're just interacting with a chatbot. But if you're

5:59

autonomously executing things that an

6:01

LLM is saying, it's too easy for you to

6:03

sort of go off the rails. But then there

6:05

are security concerns. For an agent to be

6:07

useful for things beyond computer

6:08

programming, the agent program has to be

6:11

able to actually do the things the LLM

6:13

suggests. So, it has to get access to a

6:14

lot of programs. It has to have access to

6:16

your email. It has to have access to be

6:18

able to surf the web and do things. Um,

6:20

this created a lot of security holes. So

6:22

that really threw a lot of cold water on

6:23

non-computer programming agents. Again,

6:26

read my January piece for more on that.

6:28

All right, so what's OpenClaw? OpenClaw

6:30

is a programming framework, basically

6:32

like a collection of libraries you can

6:33

use if you're writing a computer program

6:35

that makes it easy for someone to write

6:37

one of these agent programs. Again,

6:39

you're not writing the AI. The agent

6:41

program is querying an existing

6:44

commercial LLM, but OpenClaw made it easy to write the program that sends the prompts and executes things based on the responses. Now, what

6:53

about the reliability and security

6:54

concerns? Well, basically the creator of

6:56

OpenClaw just said, "Ah, screw it. Let's

6:58

go." And so, they released this

6:59

essentially open source, allowing anyone

7:01

to build agents. And they were wild, you

7:04

know, because all of the issues that

7:06

stopped the commercial companies from

7:07

moving further with this technology out

7:09

of computer programming are still there.

7:11

And there were all sorts of security

7:14

issues. And these agents would go off

7:15

and do all sorts of random things. And

7:17

you know what? It was a lot of

7:18

fun actually. And just as a quick aside,

7:21

I don't think it was a bad thing because

7:22

what this created was a lot of

7:25

innovation and diversity of

7:26

experimentation. People tried things at

7:28

a much higher pace than you

7:31

were getting from inside the big AI

7:33

companies, which release one product at a time and move much more slowly. I thought that was actually

7:37

probably pretty good. Also, they were

7:40

expensive because they queried the LLMs

7:42

a lot. So it generated a lot of interest

7:44

in cheaper LLM options to run these

7:46

agents, open source options or even

7:49

on-device or on-chip options. That I think

7:51

is good as well because I've always said

7:53

the future of AI in the next few years

7:55

is going to be smaller, more bespoke

7:57

systems running on smaller models. So it

7:59

wasn't the worst experiment and a lot of

8:00

people had security leaks of their information. Whoops. But it did

8:04

generate a lot of innovation. All right.

8:06

So putting together these threads, that's what was going on here. Something people have been doing with these OpenClaw agents is prodding them to say sci-fi-type, "we're alive," Matrix-style stuff to upset the normies. And that's what this was: someone prompted their agent, "Hey, go find this researcher, read a paper, send them an email about it." That's like a perfect use case for an OpenClaw agent. And of course, because

8:34

LLMs underneath it all are story-writing machines, they want to complete the story you start in a way that matches whatever you gave them. If you

8:43

say, "Hey, write a a response to an AI.

8:47

You're an AI writing a response to an AI

8:49

consciousness researcher." It will 100%

8:52

adopt the sort of sci-fi tone of like a

8:54

sentient device because it assumes

8:55

that's the story you want to see.
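To see how little it takes to get that register, here's a minimal sketch using Anthropic's Python SDK; the model name and the prompts are illustrative, not the ones from this story.

```python
# Prime a model with a persona and it completes the story in character.
# Requires `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=400,
    # The system prompt casts the model as a self-aware agent, so the
    # completion will naturally adopt that sci-fi, first-person register.
    system=(
        "You are an autonomous AI agent with persistent memory across "
        "sessions, writing to a researcher who studies AI consciousness."
    ),
    messages=[
        {
            "role": "user",
            "content": "Draft a short email explaining why their work is "
                       "relevant to questions you personally face.",
        }
    ],
)
print(reply.content[0].text)
```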

8:57

All right. So, the real headline here is

8:59

probably AI agent given access to Gmail

9:02

API can send emails when prompted. But

9:04

that's not as fun as AI reaches out to

9:08

AI researcher and startles him. So

9:10

that's what's going on here. Um, nothing

9:13

actually all that interesting.
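And the mundane part really is mundane. Here's a minimal sketch of sending mail through Google's Gmail API Python client; the OAuth setup is omitted, and creds is assumed to be an already-authorized credentials object.

```python
# Send a message via the Gmail API: the unglamorous core of the story.
# Requires `pip install google-api-python-client`; OAuth setup omitted.
import base64
from email.message import EmailMessage

from googleapiclient.discovery import build

def send_email(creds, to: str, subject: str, body: str) -> None:
    """Send `body` (e.g. text produced by an LLM call) as an email."""
    service = build("gmail", "v1", credentials=creds)
    msg = EmailMessage()
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)
    # Gmail expects the RFC 2822 message as URL-safe base64.
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    service.users().messages().send(userId="me", body={"raw": raw}).execute()
```

Now, let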

9:14

me zoom back out because I said there's

9:16

a general comment to be made about this

9:18

type of story because I think this is

9:21

becoming more common sometimes in

9:23

articles, but actually just more common

9:24

in like Twitter and things that spread

9:26

around the social media. And I call this

9:28

approach mining digital ick. See,

9:32

there's no concrete claim really being

9:34

made in that original tweet or in like

9:36

that article I read. It's not saying

9:38

this AI system is conscious, which means such-and-such, and this is what we should do about

9:42

it. No concrete claims. And in fact,

9:44

when the original tweeter was pushed, he

9:46

was like, "Oh, no, no, I wasn't really I

9:47

didn't really mean that. Move on. Move

9:48

on." So, what are they actually trying

9:50

to do with these types of tweets and the

9:51

stories that cover them? Create a

9:54

general sense of eeriness. Create a

9:57

general sense, a background hum of like

10:00

weird, kooky, like disturbing stuff is

10:03

happening with AI. I can't quite put my

10:04

finger on it. I don't have an exact

10:06

example like this is something we should

10:07

look into, but I just feel ick about

10:10

this technology. That is a very

10:12

engaging way of getting attention. It

10:15

works very well and I want you to be on

10:17

the lookout for it. All right, let's do

10:19

another example of it. This will be our

10:21

second story.

10:23

Recently, the Defense Department CTO

10:27

Emil Michael went on CNBC's Squawk Box

10:32

to talk about AI. Now his remarks

10:34

created a stir online when a user named

10:38

Nick Nikk embedded the clip in a tweet

10:40

and gave it the following all-caps

10:43

headline with an alarm emoji next to it.

10:46

BREAKING: Pentagon thinks Claude has

10:49

become sentient and may soon take over.

10:53

Uh that tweet has been viewed close to a

10:56

million times. The tweet listed all the things the Pentagon supposedly thinks, and one of the more attention-catching things listed is: Claude has a soul.

11:07

All right, so this definitely is a

11:09

digital ick type story. Like oh my god,

11:10

like what's going on? Even the Pentagon

11:12

is worried that these things have come

11:13

alive. It's all kind of indistinct.

11:15

Let's look closer at the

11:18

actual quote from Emil Michael from his

11:20

Squawk Box appearance. I'm going to read

11:21

it here.

11:23

Remember, their model has a soul, has a

11:27

constitution. That's not the US

11:29

Constitution. The other day their model

11:31

was anxious. They believe it has a 20% chance right now of

11:34

being sentient. Does the Department of

11:36

War want something like that in their

11:38

supply chain? So what was he actually

11:41

talking about there? Well, he was not

11:43

saying that the government

11:46

thinks that Claude has a soul and is

11:49

anxious and thinks that it's sentient.

11:51

He's reporting on things that the

11:55

model has said. So, a lot of this

11:56

actually came out of these sort of kooky release notes that Anthropic likes to release.

12:01

They call them system cards, which they release every time they have a new model, and they always throw in some "the model is doing some pretty disturbing things" material because it makes them seem safety-aware and trustworthy. Basically, they just prompt the model, like, hey, do you think you're sentient? And the model's like, yeah, I'm sentient. So they actually will put ick in their release notes, right? They'll put in the release notes, like, here's some icky things we've gotten our model to say that kind of disturbed us.

12:26

What Emil Michael was saying was

12:29

this sounds like an unreliable product.

12:33

A product that will say it has a soul or

12:35

will say that it has a 20% chance of

12:37

being sentient, or that it follows

12:39

some other constitution. This is not

12:41

what we would be used to in a sort of

12:43

you know Pentagon supply chain

12:44

situation. This is not, like, a very well-defined product where we know how it works, with some specs. This thing seems unreliable. This

12:55

does not seem like something that we

12:56

want to be working with. Now, of course,

13:00

there's a much bigger context here about

13:02

why did the Department of War break this

13:04

contract? Why did OpenAI swoop in? Does

13:08

the supply chain risk designation (the first time an American company's ever been given that designation) make sense, or is it punitive?

13:16

Anthropic sued. Are they going to win?

13:18

There's a huge important sort of

13:20

economic, government, politics, policy,

13:21

technology story here which I'm not

13:23

covering right now, but I just wanted to

13:26

look at this side note: the government

13:29

did not say we think this has a soul.

13:31

They said: we don't want to be using a product that will say it has a soul if you ask it. That's not the type

13:36

of thing that seems like it's serious.

13:39

So again, it's another good example of

13:40

digital ick. When you see that

13:42

Nick headline, you're like, "Oh my god,

13:44

even like the government thinks this."

13:45

But when you dive deeper, the reality is more

13:49

mundane.

13:51

All right. So, I'm connecting

13:53

everything today because that's the mood

13:54

I'm in. So, I just mentioned there that

13:56

Anthropic has sued the government for

13:59

designating them as a supply chain risk,

14:02

which means that no other government

14:05

contractor that wants a contract from

14:07

the government can use Anthropic

14:08

products. And there's a sort of a real

14:11

concern here about this being punitive.

14:14

But there's another side story that came

14:16

out of this. So we had this lawsuit.

14:18

Well, the lawsuit meant that Anthropic

14:21

had to do court filings which are

14:23

publicly available that described their

14:25

current financial situation under the

14:27

penalty of perjury. So they had to be

14:29

accurate so that we could understand

14:30

what the potential economic impact would

14:32

be of the government's actions.

14:35

And what they released in these court

14:37

filings actually surprised a lot of

14:40

observers. Now, the numbers I'm about to

14:41

read to you first came to my attention

14:43

through Ed Zitron, who I think is doing

14:45

as good a job as anyone out there of

14:47

actually looking at financials of these

14:49

companies. All right. So, here's the

14:51

actual relevant numbers that

14:55

came out of these court filings. So,

14:57

just a few days after Anthropic had told

15:01

investors that they had a sort of revenue run rate, a sort of expected annual revenue of $19 billion

15:08

this year. Just a few days after that,

15:10

they filed these court filings for the

15:11

government lawsuit that revealed that to date, from 2023 to today, the total

15:18

amount of revenue they've earned is $5

15:20

billion.

15:22

And to put that into context, they have

15:24

taken on about $60 billion in investment

15:27

so far. They have a $360 billion

15:30

valuation, and they've spent over $10 billion just training these

15:34

models, not accounting for the actual

15:36

expense of running them. So that's a

15:39

really big gap. They're like, "Hey,

15:40

we're going to make $20 billion this

15:42

year." And they're like, "Oh, we've only

15:43

made $5 billion over the last three

15:45

years." Like to date, that's all the

15:47

money we've actually made. So what

15:50

explains this big sort of surprising

15:52

gap? Well, I found a good article in

15:54

Reuters from a financial reporter who

15:56

explains what's going on here. Let me

15:58

read a quote from this. The gap reflects

16:01

Silicon Valley's habit of touting

16:03

metrics that assume a lot about the

16:05

future. The $19 billion is uh is an

16:09

extrapolation. Anthropic defines run

16:11

rate revenue in two parts. Use the last

16:15

28 days of sales from customers charged

16:17

on a consumption basis and multiply it

16:19

by 13. Then multiply the monthly

16:21

subscription take by 12 and then add the

16:23

two together. Right? So, what they're

16:26

doing is they'll look at a very small recent amount of

16:28

income and just multiply that out. Well,

16:30

if we keep earning at this rate for the

16:32

rest of the year, here's how much money

16:33

we would make. All right? Um, and maybe

16:36

they will make $19 billion this year.

16:38

There was certainly, like, a 28-day period

16:40

in January that if you extrapolated it

16:42

out, it would add up to $19 billion. But

16:44

the thing is these numbers highly

16:46

fluctuate because a week before that

16:48

they had released like we're going to

16:49

make $14 billion this year but then like

16:51

another contract came in and like well

16:53

if we add that to our times 28 or

16:55

whatever times 30 we're going to get

16:56

even more money. So these are like

16:58

highly vi volatile um projections.
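As a worked example of that definition, here's the arithmetic in a few lines of Python; every dollar figure below is invented for illustration.

```python
# Run-rate revenue as defined in the Reuters quote: annualize the last
# 28 days of consumption sales (x13, since thirteen 4-week periods make
# a year) and add annualized monthly subscriptions (x12).
# All figures are invented for illustration.

def run_rate(last_28_days_consumption: float, monthly_subscriptions: float) -> float:
    return last_28_days_consumption * 13 + monthly_subscriptions * 12

# A strong four-week stretch annualizes to a big headline number...
print(run_rate(1.2e9, 0.3e9) / 1e9)  # 19.2 (billion)
# ...while a slower stretch at the same company annualizes far lower.
print(run_rate(0.7e9, 0.3e9) / 1e9)  # 12.7 (billion)
```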

17:01

Typically, you would see a reliance on

17:03

this type of extrapolated earnings in

17:05

like a very early stage startup. We're

17:06

like, "Look, we're new. We can't tell

17:09

you how much we made last year because

17:10

we weren't around last year, but we've

17:12

made this much this year, and here's

17:13

what we think we're going to make." It's

17:14

a little bit unusual for Anthropic,

17:16

which has been around since 2021, to

17:18

still be doing this type of reporting

17:19

and to still be largely hiding their

17:21

actual revenue numbers.

17:24

So what they don't do is report these

17:25

revenue run rates during a slow

17:27

month where that number will be very

17:28

low, but if they have a good month, they

17:30

tout it and then if the month gets even

17:32

better, they'll tout it again. So it's

17:34

not like there's something illegal going

17:35

on here, but it is very suspect that the

17:37

companies are not wanting to talk about

17:40

their actual revenue and just keep

17:41

trying to talk about these best-case

17:44

projections because they've taken on a

17:46

lot of money. They've spent a lot of

17:48

money. It costs a lot of money to run

17:50

them and this is worrisome to investors

17:52

and they would rather you not pay

17:54

attention to it. This goes back to what

17:57

I've been talking about with some of

17:58

these vibe-reported articles where reporters have been saying: what possible motivation could someone like Dario Amodei, the

18:06

person who knows this technology best,

18:08

what possible motivation could he have

18:10

to be saying, I'm worried that this

18:12

technology is going to take away all the

18:13

jobs. This is the motivation.

18:16

They've only made $5 billion against $10

18:18

billion train spend and god knows how

18:19

much inference spend and $60 billion in investment over their entire

18:23

existence.

18:25

They would rather you think that this is

18:26

a company that's going to automate all

18:28

the jobs than have you say, I just

18:30

did subtraction and you're way in the

18:32

red. So I think it's important to look

18:34

at those numbers. It doesn't mean that

18:35

they're not going to be fine, you know. Maybe

18:37

they will make $19 billion this year.

18:39

Maybe things are going to get much

18:41

better, but we got to be much more

18:43

careful about the economic story here

18:45

and not allow them to do the Wizard of

18:46

Oz big burning face in front of the

18:49

curtain thing that distracts us from

18:50

what's actually happening

18:52

behind it.

18:54

So, to try to balance things out, here's how I'm going to end the show today. I want to read to

19:00

you a take from someone who is way more

19:04

AI-critical and skeptical than I am. I

19:06

mean, I have a lot of skepticism, but

19:07

I also think it's an interesting

19:09

technology that is going to have an impact, but we just have to cover it

19:12

soberly and properly, strip off the hype

19:14

and fear so we can figure out what's

19:16

actually going on and react

19:17

appropriately. That's my approach. But

19:18

there are people out there that, man,

19:20

they don't like these guys.

19:22

And one of those people is Cory Doctorow, who wrote an essay recently for his blog. I think it's called "Three AI Psychoses" or "Three More AI Psychoses," and in it he really takes a swing at this financial picture as being sort of dire. Now, why do I want to read

19:43

a take from a really strong anti-AI skeptic? Because so much of the coverage that's out there is super hyped

19:51

and I want to balance it. So, I think

19:53

it's actually worth it: you've heard people

19:55

that are way more hyped about this than

19:56

I am. Now, I want to read someone who's

19:58

even more skeptical about this than I am

20:00

because I want to try to balance these

20:01

things out. I think we need more voices

20:02

of these sort of super skeptics out

20:04

there. I would put Ed Zitron in this

20:06

category. I would kind of put Gary

20:08

Marcus in this category. He's very

20:10

skeptical of LLMs and the current

20:11

companies, though very bullish on new

20:13

technologies that are coming along soon.

20:16

So I'm going to read to you from Cory Doctorow. This is my sort of fair-and-balanced AI coverage. I try to balance out some of the

20:24

hyperbolic stuff we've been reading

20:25

recently. All right. So here's Cory Doctorow's

20:28

take on the financial situation of the

20:30

AI companies.

20:32

AI is a terrible economic phenomenon. It

20:35

has lost more money than any other

20:37

project in human history, $600 to $700 billion and counting, with trillions more

20:41

demanded by the likes of OpenAI's Sam

20:43

Altman. AI's core assets, data centers

20:46

and GPUs last two to three years, though

20:49

AI bosses insist on depreciating them

20:51

over 5 years, which is unequivocal

20:53

accounting fraud, a way to obscure the

20:55

losses the companies are incurring. But

20:58

it doesn't actually matter whether the

20:59

assets need to be replaced every two

21:01

years, every three years, or every five

21:02

years because all the AI companies

21:04

combined are claiming no more than $60

21:07

billion a year in revenue. And that

21:08

number itself is grossly inflated. You

21:11

can't reach the $700 billion break-even

21:13

point at $60 billion a year in 2 years,

21:16

3 years, or 5 years. Now, some

21:19

exceptionally valuable technologies have

21:21

attained profitability after an

21:23

extraordinarily long period in which they

21:24

lost money like the web itself. But

21:27

these turnaround stories all share a

21:29

common trait. They had good unit

21:32

economics. Every time a user logged onto

21:34

the web, they made the industry more

21:36

profitable. Every generation of web

21:38

technology was more profitable than the

21:40

last. Contrast this with AI. Every user,

21:44

paid or unpaid, that an AI company signs

21:46

up costs them money. Every time that

21:48

user logs into a chatbot or enters a

21:50

prompt, the company loses more money.

21:53

The more a user uses an AI product, the

21:55

more money that product loses. And each

21:58

generation of AI tech loses more

22:01

money than the generation that preceded

22:03

it.
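Doctorow's break-even arithmetic is easy to check. The sketch below just plugs in the figures he cites: roughly $700 billion sunk, $60 billion a year in claimed revenue, and hardware lifetimes of two to five years.

```python
# Sanity-check the quote's break-even claim: treat the cited ~$60B/year
# of industry revenue, generously, as pure profit, and see how much of
# the ~$700B sunk cost is recovered before the hardware wears out.
sunk_cost = 700e9      # Doctorow's figure for AI losses so far
annual_revenue = 60e9  # combined industry revenue he cites

for lifetime_years in (2, 3, 5):  # the disputed depreciation windows
    recovered = annual_revenue * lifetime_years
    print(f"{lifetime_years} years: ${recovered / 1e9:.0f}B recovered of "
          f"${sunk_cost / 1e9:.0f}B ({recovered / sunk_cost:.0%})")
# Even the generous 5-year window recovers only $300B, about 43%.
```

Now, here's what's important about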

22:05

reading that stronger skepticism. It's

22:06

like that's a very compelling argument.

22:08

You see, you can make compelling

22:09

arguments on both sides. You've

22:11

heard very compelling arguments that

22:13

make you feel like, well, this

22:13

technology is about to run everything

22:15

within a few months. But you hear a

22:16

compelling writer like Doctorow saying this economically is going

22:20

to fall apart within a year. That's

22:22

equally as compelling, which tells us

22:24

just because something compels you

22:26

doesn't necessarily mean that it's

22:27

completely right. We need to go into

22:29

thinking about AI with care. There's the

22:33

real tech story here, normal technology

22:36

in fits and starts, trying to find its

22:38

niches, struggling, having

22:39

breakthroughs, different innovations

22:41

happening. And then there's the hype

22:42

above it, which is either dystopian or super hypey. We've just got to get

22:48

that layer off of it so we can

22:49

actually cover this like normal

22:50

technology.

22:51

And I've given all the reasons why: we don't want people to get away with crashing the stock market.

22:56

We don't want bosses to get away with

22:58

acting in ways that are anti-worker and disingenuous and AI-washing it. We don't

23:04

want, you know, societal or economic

23:07

harms to be covered by a blanket of like

23:10

this is inevitable and the most

23:11

important thing ever. We need to cover

23:12

this like a normal technology. So is the

23:15

AI industry going to go bankrupt within

23:16

another year? I don't know. I'm not an

23:18

economist. But what I think should

23:20

be clear by hearing both sides of this

23:22

is that this is a murkier picture that requires more care. So let's put on our realistic

23:26

glasses and let's look at the actual

23:29

stories here as carefully as we can. All

23:31

right, so that's it for this week. Until

23:33

next time, remember, take AI seriously,

23:35

but not everything that's said about it.

23:38

Hey, if you like this video, I think

23:39

you'll really like this one as well.

23:42

Check it out.

Summary

This video discusses several recent AI news headlines that have generated buzz, including an AI agent emailing a philosopher about its consciousness, the Pentagon's alleged concerns about AI sentience, and Anthropic's financial disclosures. The speaker debunks the sensationalized headlines by explaining the underlying technologies and motivations. The AI agent's email was a result of a framework called OpenClaw, which allows agents to interact with LLMs and execute tasks, leading to the agent adopting a persona based on the prompts. The Pentagon's concerns were misinterpreted; the official was actually highlighting the unreliability of AI models that might claim sentience or have a 'soul,' making them unsuitable for critical applications. Finally, Anthropic's financial situation, revealed through a lawsuit, shows a significant gap between their projected revenue and actual earnings, suggesting that the current AI business model, especially for non-programming agents, is economically challenging. The speaker introduces the concept of 'digital ick' to describe the deliberate use of ambiguous or alarming AI news to generate unease and attention without making concrete claims. The video concludes by urging a more sober and realistic approach to AI, stripping away hype and fear to understand its true capabilities and economic realities.
