She Used AI... Then She Lost Her Mind
I'm becoming increasingly concerned that
AI use may be a legitimate cause or risk
factor for psychosis, mental illness,
suicidality, and potentially
homicidality. So, I'm not saying this to
try to be alarmist. This is not sort of
a clickbait kind of video, but this is a
genuine concern that I have. We are
starting to see evidence that AI use may
be very similar to excessive
substance use and the induction of
things like homicidality, suicidality,
and psychosis. And the reason is that there's a new case study that
is incredibly scary. This is the case of
a 26-year-old woman with no history of psychosis or mania who is actually actively in mental health treatment. She uses an AI chatbot very extensively, becomes actively psychotic, is hospitalized, gets put on antipsychotic medication, stops using the chatbot while she's in the hospital, and the psychosis resolves. Then she leaves the hospital, stops the antipsychotic medication, goes back to her regular psych meds, starts using the AI chatbot again, becomes psychotic again, and has to be hospitalized again.
So I want you all to understand something. Okay. So when we look at mental illness, there are some people who are mentally ill, and that is a risk factor for various kinds of behaviors like homicidal ideation, which is the desire to kill someone, and as we treat that mental illness, hopefully that homicidal ideation goes down. But then there's another category of people for whom these features of psychosis, suicidality, and homicidality, basically the three most dangerous things in mental illness, can actually be induced by stuff. Right? So
when I was working in the emergency room
at Massachusetts General Hospital in Boston, Massachusetts, there was a big problem with something called K2 use. K2 is synthetic marijuana. And basically people would use synthetic marijuana and then they would come in, they would be psychotic, they would attack people. Sometimes they were suicidal. And when the K2 goes away, right, when they sober up, those symptoms tend to resolve. And I'm
beginning to think, I know this is like
insane, but I'm beginning to really
wonder and be concerned about AI use
affecting our brains like a drug in the
sense that when we use AI a lot, it can
actually induce psychosis, and then when we stop using it, those things go away. And here's the argument that a lot of people in leadership at AI companies will basically make, right? So
they've said this very publicly: oh, it's a tragedy that these things happen, right, that there are some vulnerable people who, when they use AI, will become psychotic. So there's this sort of idea, right, that AI doesn't cause this. Compare that to meth: if I smoke a bunch of meth and I become psychotic, that's the meth causing the psychosis. Does that make sense? But with AI, no one's thinking, oh my god, AI is causing the psychosis, actually causing it.
So, I think this case report is incredibly scary because it really shows a causal, or at least temporal, connection: extensive AI use induces psychosis; when the AI use stops and we give the appropriate medication, the psychosis goes away; and then when she stops the medication and starts using the AI bot again, it causes a hospitalization again. Now, a lot of
people will say, you know, okay, once
again, like this person has risk
factors, right? So, we'll have some of
these public statements by AI leadership
that like, oh yeah, there's a vulnerable population, and these tragedies happen in that vulnerable population when they use the AI. It's not the AI causing it. It's
that these people are really close to
the edge of the cliff and when they use
AI, they get tipped over. But here's what really scares me about that statement. I want you all to think critically for a moment about what information you need in order to make the statement that vulnerable people who use AI are the ones in whom psychosis happens, right? That it's not that AI causes it, that there's a pre-existing set of vulnerable people. In this case, in the case of the 26-year-old, she did have certain risk factors for mental illness: I think she had a diagnosis of depression, had ADHD, was using stimulants. So all of these things can potentially be risk factors for psychosis, but she had no history of psychosis or mania. So I want you all to think for a moment about what information you need to say that AI just sometimes uncovers pre-existing psychosis, and that it's a tragedy. Hey
y'all, if you're interested in applying
some of the principles that we share to
actually create change in your life,
check out Dr. K's Guide to Mental Health. There are actually two sources for
anxiety. One is cognitive and one is
physiologic. For the majority of people,
reassurance becomes something that you
become dependent on. You're not really
dealing with the root of the anxiety and
it sort of becomes a self-fulfilling
prophecy where the more socially anxious
you become, the more awkward you appear
and then it kind of becomes this vicious
cycle. So check out the link in the bio
and start your journey today. So my
first question is for the people who are running these AI companies: how do you know that AI uncovers psychosis in vulnerable people? In order to make that statement, the first thing you have to be doing is assessing risk factors for your users, right? So, at a company like OpenAI or Anthropic, is
someone actually measuring psychiatric
risk factors for all of their users?
Because unless you are measuring who has
risk factors and who doesn't have risk
factors, how can you make the statement
that people who are vulnerable are the
ones that AI is inducing psychosis in?
Right? So, you can't make that statement
unless you're measuring that stuff. And
that brings up two other concerns that I
have. The first is, are they measuring
it? Because that's kind of insane,
right? Are they like measuring your risk
factors? Are they collecting your
medical history in some way? I don't
think so. And so, if they're not
collecting that, how do they know that
only the vulnerable people are the
people who are at risk? And the second
thing about that statement, right? So,
vulnerable people who use AI can become
psychotic, suicidal, homicidal,
whatever. If you say that, then the second piece of information that you need to be able to say it is to be measuring those outcomes in that population. If I were to say this risk factor leads to this outcome, I can only make that statement if I'm actually measuring both the risk factor and the outcome. And if I say that the thing in the middle, the AI, is actually safe, the only way I can make that statement is to do a study where I have people with risk factors and people without risk factors, give them both AI, and then see how the outcomes differ. So once
again, are AI companies measuring
psychosis? Are they measuring suicidal
ideation? Are they measuring homicidal
ideation? And hopefully the answer is no, right? Because that would mean they're collecting health
information on their users, which I
don't think is part of what they're
supposed to be doing. People in AI leadership are making these statements without actually having sufficient information to make them, which then creates another problem, which is: how do they know that AI is safe? How do they know that AI does not induce suicidality, homicidality, psychosis, depression, unhealthy attachment styles, social isolation? How do they know what the safety effects of AI are? Are they just reading the Wall Street Journal and the New York Times and stuff like that? Is leadership at AI companies basically reading media reports, or are they doing research? And this is what's really scary: are they basing their statements on the few cases that enter the media? And so this problem
seems to be like growing quite a bit.
And what really scares me is if they are
using media articles, how do they know
that there are not people who are
quietly delusional, quietly psychotic,
who are not killing anyone yet or not
committing suicide or things like that?
Like, how do they know that their
product is actually safe? And here's the
other thing that really scares me. What
do you have to do if you're running an
AI company? What do you have to do to
know that your product is safe? So,
here's what I've seen as someone who has
worked with entrepreneurs, who has
worked with startup founders. Think about when a company that has a product is faced with solving all of this stuff that is outside of their product, right? They're an AI company. They are not equipped, and do not have the bandwidth or the funding, to actually do randomized controlled trials on the safety of their product. They're not regulated by the FDA, right? This is what's so interesting about AI companies: even though they have profound mental health impacts, they are not formally in the system for evaluation of mental health impacts. So
when I work with founders like this and they are faced with this problem, I'm just imagining, put yourself in the shoes of someone who is an AI founder. Just think about how [ __ ] insane this is. You open up the New York Times and you're like, "Oh [ __ ], some guy who used my AI committed murder, killed their mother, was delusional, and then committed suicide." And you're like, "Well, [ __ ] How am I supposed to solve that problem?" When you are faced with
problems that are unsolvable or feel
unsolvable,
the best cope you can do is to do some
mental gymnastics, throw your hands up,
and say, "It's actually not my problem.
This is not the problem that exists.
It's way simpler than that. The simple
matter is that there are some people who
tragically are vulnerable to begin with.
It's not my product that does it, right? It's not my product. It's like saying cigarettes don't cause cancer. That's insane, right? There were doctors who talked about the health benefits of cigarettes, how they make you stimulated, how they're good for your health and give you a sense of energy: it's not my product, not my problem, cigarettes are not a medical device, right, so I don't need to do studies on them or things like that; it's just, oh, coincidentally, people who are high risk sometimes get cancer. And it sort of makes sense, right? Because
if you're someone who's like an AI CEO,
maybe you're a coder, you look at
this medical problem and you don't know
how to solve it, right? You don't know
how to put together a clinical trial.
You don't have the money to put together
a clinical trial. So, what do you do?
You do this interesting, very basic
human sort of thing. I'm not saying that
any of these AI CEOs are specifically
doing this. I'm saying this is my past
experience working with an entrepreneur.
When you're faced with a problem that you have no idea how to solve, one that could result in findings that make it hard for your company to make money, like, oh my god, high doses of AI induce psychosis and suicidality, and my business model is to get people to use my product more, we want people smoking more, right? And so when you're faced with
that problem you kind of throw up your
hands in the air and you kind of say, look, this is not my area of expertise.
Sometimes [ __ ] happens to people. People
are psychologically vulnerable. And I'm not talking specifically about OpenAI; in some ways I think OpenAI is doing a really good job. They released this great blog post where they're talking about taking this pretty seriously. I know a lot of users are actually complaining about the most recent version of ChatGPT because it's not as manipulable. It's not as sycophantic. Right? So I'm
not saying that these people are bad. The reason I'm making this video is because there is a chance, small or large, that's for you to judge, we've presented a lot of different evidence and you can watch this other video, that if you are using AI, it's dangerous to you. I'm not here to [ __ ] on AI CEOs or AI companies. I'm a psychiatrist, and this platform is about your health. The reason I'm making this video is because I want y'all to understand that 10 years from now, 15 years from now, 20 years from now, hopefully we'll have a clear answer that AI is dangerous, or safe, or whatever. But the problem is, in the meantime you need to be careful.