
She Used AI... Then She Lost Her Mind


Transcript


0:00

I'm becoming increasingly concerned that AI use may be a legitimate cause or risk factor for psychosis, mental illness, suicidality, and potentially homicidality. I'm not saying this to be alarmist, and this is not a clickbait kind of video; this is a genuine concern that I have. We are starting to see evidence that AI use may be very similar to excessive substance use in the induction of things like homicidality, suicidality, and psychosis. And the reason is a new case study that is incredibly scary. This is the case of a 26-year-old woman with no history of psychosis or mania, who is actually actively in mental health treatment. She uses an AI chatbot very extensively, becomes actively psychotic, and is hospitalized. She is given antipsychotic medication, stops using the chatbot while she's in the hospital, and the psychosis resolves. Then she leaves the hospital, stops the antipsychotic medication, restarts her regular psych meds, starts using the AI chatbot again, becomes psychotic again, and has to be hospitalized again.

1:14

So I want you all to understand something. When we look at mental illness, there are some people who are mentally ill, and that is a risk factor for various kinds of behaviors, like homicidal ideation, which is the desire to kill someone; as we treat that mental illness, hopefully the homicidal ideation goes down. But there's another category of people for whom these features of psychosis, suicidality, and homicidality, which are basically the three most dangerous things in mental illness, can actually be induced by external things, right?

1:44

So when I was working in the emergency room at Massachusetts General Hospital in Boston, Massachusetts, there was a big problem with something called K2 use. K2 is synthetic marijuana. Basically, people would use synthetic marijuana and then come in psychotic; they would attack people, and sometimes they were suicidal. And when the K2 goes away, when they sober up, those symptoms tend to resolve.

2:08

And I'm beginning to think, and I know this sounds insane, but I'm beginning to really wonder and be concerned about AI use affecting our brains like a drug, in the sense that when we use AI a lot, it can actually induce psychosis, and when we stop using it, those things go away.

2:31

And here's the argument that a lot of people in leadership at AI companies will make. They've said this very publicly: it's a tragedy that these things happen, that there are some vulnerable people who, when they use AI, become psychotic. So there's this idea that AI doesn't cause this. Compare that to drugs: if I smoke a bunch of meth and become psychotic, that's the meth causing the psychosis. Does that make sense? With AI, no one is saying, oh my god, the AI is causing the psychosis, actually causing it.

3:04

So I think this case report is incredibly scary, because it shows a causal, or at least temporal, connection: extensive AI use and the induction of psychosis; when the AI use stops and we give the appropriate medication, the psychosis goes away; and when she stopped taking the medication and started using the AI chatbot again, it caused another hospitalization.

3:26

Now, a lot of people will say, okay, once again, this person has risk factors, right? We'll see some of these public statements by AI leadership: oh yeah, there's a vulnerable population, and these tragedies happen in a vulnerable population when they use the AI. It's not the AI causing it; it's that these people are really close to the edge of the cliff, and when they use AI, they get tipped over.

3:51

what really scares me about that

3:53

statement. I want you all to think

3:55

critically for a moment about what

3:58

information

4:00

do you need to make the statement

4:03

vulnerable people who use AI that's

4:06

where when psychosis happens right like

4:08

it's not it's not that AI causes it

4:11

there's a pre-existing set of vulnerable

4:13

people in this case in in the case of

4:14

the 26-year-old she did have certain

4:16

risk factors for mental illness she I

4:18

think had a diagnosis of depression had

4:20

ADHD was using stimulants so all of

4:22

these things can potentially

4:24

risk factors for psychosis, but she had

4:26

no history of these things. So, I want

4:28

you all to think for a moment about what

4:30

information

4:36

can sometimes uncover pre-existing

4:39

psychosis. And it's a tragedy. Hey

4:40

Hey y'all, if you're interested in applying some of the principles that we share to actually create change in your life, check out Dr. K's Guide to Mental Health. There are actually two sources of anxiety: one is cognitive and one is physiologic. For the majority of people, reassurance becomes something that you become dependent on. You're not really dealing with the root of the anxiety, and it sort of becomes a self-fulfilling prophecy, where the more socially anxious you become, the more awkward you appear, and then it becomes this vicious cycle. So check out the link in the bio and start your journey today.

5:13

So my first question is for the people who are running these AI companies: how do you know that AI uncovers psychosis in vulnerable people? In order to make that statement, the first thing you have to be doing is assessing risk factors for your users, right? Is someone at a company like ChatGPT or Claude or Anthropic actually measuring psychiatric risk factors for all of their users? Because unless you are measuring who has risk factors and who doesn't, how can you make the statement that people who are vulnerable are the ones AI is inducing psychosis in? You can't make that statement unless you're measuring that stuff.

5:51

And that brings up two other concerns that I have. The first is: are they measuring it? Because that's kind of insane, right? Are they measuring your risk factors? Are they collecting your medical history in some way? I don't think so. And if they're not collecting that, how do they know that only the vulnerable people are the ones at risk? And here's the second thing about that statement, that vulnerable people who use AI can become psychotic, suicidal, homicidal, whatever: the second piece of information you need to be able to say that is the outcomes, measured in that population.

6:25

If I were to say this risk factor leads to this outcome, I can only make that statement if I'm measuring both. And if I say that the thing in the middle, the AI, is actually safe, the only way I can make that statement is to do a study where I have people with risk factors and people without risk factors, give them both AI, and then see how the outcomes differ.

6:44

So once again: are AI companies measuring psychosis? Are they measuring suicidal ideation? Are they measuring homicidal ideation? And the answer is, hopefully not, sort of, because that would mean they're collecting health information on their users, which I don't think is part of what they're supposed to be doing.

7:01

these statements in AI leadership

7:03

without actually having the sufficient

7:07

information to make those statements

7:09

which then creates another problem which

7:11

is like how do they know that AI is

7:15

safe? How do they know that AI does not

7:18

induce suicidality, homicidality,

7:22

psychosis, depression, um unhealthy

7:25

attachment styles, social isolation? How

7:27

do they know what the safety effects of

7:29

AI are? Are they just reading the Wall

7:32

Street Journal and the New York Times

7:34

and stuff like that? Are basically

7:35

leadership at AI companies? Are they

7:37

reading media reports or are they doing

7:41

research? And then this is what's really

7:43

scary is like are they basing their

7:45

statements based on the few cases that

7:48

enter the media? And so this problem

7:50

seems to be like growing quite a bit.

7:53

And what really scares me is if they are

7:55

using media articles, how do they know

7:58

that there are not people who are

7:59

quietly delusional, quietly psychotic,

8:02

who are not killing anyone yet or not

8:04

committing suicide or things like that?

8:05

Like, how do they know that their

8:07

product is actually safe? And here's the

8:09

And here's the other thing that really scares me. What do you have to do if you're running an AI company? What do you have to do to know that your product is safe? Here's what I've seen as someone who has worked with entrepreneurs and startup founders. A company with a product gets faced with solving all of this stuff that is outside of their product. They're an AI company: they are not equipped, and do not have the bandwidth or the funding, to actually do randomized controlled trials on the safety of their product. They're not regulated by the FDA. This is what's so interesting about AI companies: even though they have profound mental health impacts, they are not formally in the system for evaluation of mental health impacts.

9:01

So when I work with founders like this and they are faced with this problem, I'm just imagining it: put yourself in the shoes of an AI founder, and just think about how [ __ ] insane this is. You open up the New York Times and you're like, "Oh [ __ ] some guy who used my AI committed murder, killed their mother, was delusional, and then committed suicide." And you're like, "Well, [ __ ] How am I supposed to solve that problem?"

9:31

When you are faced with problems that are unsolvable, or feel unsolvable, the best cope you can do is some mental gymnastics: throw your hands up and say, "It's actually not my problem. This is not the problem that exists. It's way simpler than that. The simple matter is that there are some people who, tragically, are vulnerable to begin with. It's not my product that does it, right? It's not my product." It's like saying cigarettes don't cause cancer. That's insane, right? There were doctors who talked about the health benefits of cigarettes, how they stimulate you, how they're good for your health and give you a sense of energy. It's not my product, not my problem. Cigarettes are not a medical device, right? So I don't need to do studies on them or things like that. It's just, oh, coincidentally, people who are high-risk sometimes get cancer.

10:17

And it sort of makes sense, right? Because if you're an AI CEO, maybe a coder, you look at this medical problem and you don't know how to solve it. You don't know how to put together a clinical trial. You don't have the money to put together a clinical trial. So what do you do? You do this interesting, very basic human sort of thing. I'm not saying that any of these AI CEOs are specifically doing this; I'm speaking from my past experience working with entrepreneurs. You're faced with a problem that you have no idea how to solve, and that could result in findings that make it hard for your company to make money: oh my god, a high dose of AI induces psychosis and suicidality, and my business model is to get people to use my product more. We want people smoking more, right? And so when you're faced with that problem, you kind of throw your hands up in the air and say, look, this is not my area of expertise. Sometimes [ __ ] happens to people. People are psychologically vulnerable.

11:13

And I'm not talking specifically about OpenAI; in some ways, I think OpenAI is doing a really good job. They released this great blog post where they talk about taking this pretty seriously. I know a lot of users are actually complaining about the most recent version of ChatGPT because it's not as manipulable, it's not as sycophantic, right? So I'm not saying that these people are bad.

11:33

The reason I'm making this video is because there is a small chance or a large chance (that's up to you; we've presented a lot of different evidence, and you can watch this other video) that if you are using AI, it's dangerous to you. I'm not here to [ __ ] on AI CEOs or AI companies. I'm a psychiatrist, and this platform is about your health. The reason I'm making this video is because I want y'all to understand that 10 years from now, 15 years from now, 20 years from now, hopefully we'll have a clear answer that AI is dangerous, or safe, or whatever. But in the meantime, you need to be careful.

Summary

The video discusses the potential risks of AI use, particularly concerning mental health. The speaker expresses concern that AI use could be a risk factor for psychosis, mental illness, suicidality, and homicidality. A case study of a 26-year-old woman is presented, who experienced psychosis after extensive AI chatbot use, which resolved when she stopped using it. The speaker draws parallels to substance abuse, like K2 use, which can induce similar symptoms. The video critiques AI companies' claims that AI doesn't cause these issues but rather uncovers pre-existing vulnerabilities. The speaker argues that without proper risk assessment and outcome measurement by AI companies, these claims are unsubstantiated. The video highlights the lack of regulation and formal evaluation for AI's mental health impacts, comparing it to the FDA's role in medical devices. It suggests that AI founders, facing complex problems with no easy solutions, may resort to downplaying the risks or attributing them solely to user vulnerability. The speaker emphasizes that the long-term safety effects of AI are still unknown and urges caution.
