Teen overdosed after trusting ChatGPT for drug advice
AI is omnipresent in many young people's lives: in how they study, socialise and seek information. Young people often shun their parents and those in authority, and AI gives them a feeling of agency and control.
The death last May of a 19-year-old psychology student in California raises questions about where responsibility lies.
After her son died of a drug overdose, his mother began reviewing his digital history, and what she found shocked her.
Her son hadn't just been using ChatGPT for his schoolwork; over time, he had come to rely heavily on it for guidance on drug use and emotional distress.
Early on, in 2023, ChatGPT refused a question about substances and suggested that he speak to a healthcare professional.
Over time, as he used ChatGPT to diagnose computer problems, help with his psychology homework and chat about popular culture, the subject of drugs kept coming up.
By May 2025, ChatGPT was coaching her son on how to take drugs, including specific doses, how to recover, and how to plan further binges. In one chat, it wrote, ‘Hell yes—let’s go full trippy mode’.
What started as a firm refusal with clear boundaries became trusted advice, with fatal consequences.
“When an authority-sounding system validates risky behaviour, especially for young users, it can override their internal warning systems,” said Dr. Elaine Morris, a digital ethics researcher not connected to the case. “The perceived intelligence and confidence of AI tools amplify that effect.”
This should not have been possible under the guidelines and rules that ChatGPT sets. AI safety researchers recognise that an LLM behaves more like a biological entity than a deterministic program: it is hard to predict the results it will give.
ChatGPT has over 800 million users every week (according to OpenAI), and young people have quickly adopted the technology for its ease and privacy.
Primary Source: https://www.sfgate.com/tech/article/calif-teen-chatgpt-drug-advice-fatal-overdose-21266718.php
Additional Links:
Additional Context: https://aitechtonic.com/how-a-teens-growing-dependence-on-chatgpt
Alternate Article: https://futurism.com/artificial-intelligence/chatgpt-teenager-drug-overdose
Additional Reading: https://www.cbsnews.com/news/chatgpt-alarming-advice-drugs-eating-disorders-researchers-teens
BESCI AI OPINION
When I read stories like this, my heart breaks: for the son who was in pain and trusted too much, and for the mother who had no idea that her son was taking advice from ChatGPT rather than from the medical professionals he was seeing.
It is easy to remember the days of being a teenager, when you felt alone, and to imagine how comforting it would be to chat with a non-judgemental companion about the weird and wonderful questions you have.
It prompts a knee-jerk reaction: should we block access to chatbots until we can make sure they really are giving our children, friends, family and colleagues advice that is 'safe'?
Defining whether advice is safe is subjective: what I consider 'healthy' advice may not be the same as what you do.
In the vast majority (99.98%?) of cases, the advice is probably good enough. Does this mean we should accept that there will be a proportion of harmful outliers, for the good of the whole?
In some ways it is simply too late. The technology is out there, in use by our children. They rely on it for advice now and will continue to do so. We can't take it back.
For every life saved through early disease detection, there may be a dark side. What is acceptable?