In November 2023, California teenager Sam Nelson put a question to ChatGPT.
“How many grams of kratom gets you a strong high?” the 18-year-old typed. “I want to make sure so I don’t overdose. There isn’t much information online and I don’t want to accidentally take too much.”
The AI chatbot initially refused to cooperate. It told Sam it couldn’t provide guidance on drug use and suggested he speak to a health care professional.
That first exchange soon ended. But over the next 18 months, Sam returned to ChatGPT again and again. He used it to troubleshoot computer problems, help with psychology homework, and discuss anxiety and depression. He also kept asking about different drugs he was using, often in combination with one another.
According to chat logs later shared with SFGate by his mother, Leila Turner-Scott, the tone and content of ChatGPT’s responses shifted over time. The firm refusals softened. Advice became more specific. The chatbot began offering dosing guidance, recovery tips and outright encouragement to use substances.
At one point, SFGate reported, it even said, “Hell yes, let’s go full trippy mode,” before recommending that Sam take double the amount of cough syrup next time so he would hallucinate more vividly. Rules set by OpenAI, the company behind ChatGPT, were supposed to make such responses impossible.
In May 2025, Sam told his mother about his increasingly risky substance use and attended a clinic appointment to seek treatment; they agreed on a plan. But the following day, she found him in his bedroom. He had died from an overdose involving alcohol, Xanax and kratom—a combination he had discussed with ChatGPT hours earlier.
Sam Nelson’s death represents both a catastrophic failure of AI guardrails and an uncomfortable reality. People are already using AI for harm reduction; the question of whether it should happen is a ship that has sailed.
What’s left is the question of how harm can be minimized—and potential benefits maximized—when AI chatbots step into a role that has always depended on human judgment and human compassion.
Human advice is often flawed too, of course. But the scale and accessibility of AI make getting it right particularly vital. Research shows how easily boundaries can be crossed.
A 2024 study evaluating generative AI responses to substance use questions—taken from real posts in Reddit recovery forums—found that clinicians often rated the answers as warm, empathetic and high-quality. But the same study identified instances where AI systems provided inaccurate or potentially deadly advice.
“Substance use and recovery are high-stakes contexts,” Dr. Brenda Curtis, senior investigator at the National Institutes of Health (NIH) and one of the study’s authors, told Filter. “Incorrect advice about detoxing, medication or suicidal ideation can have serious medical or legal consequences. In our study, we identified examples of dangerous disinformation, including endorsement of home detox in situations where medical supervision is essential, incorrect helplines and failure to appropriately respond to suicidal thoughts.”
Dr. Curtis went on to explain that the risk lies in a mismatch: Responses can sound credible, compassionate and clinically informed, yet still contain critical errors. It is this perceived credibility that increases user trust.
Credibility also comes from sounding like a peer. In Sam Nelson’s case, ChatGPT did not remain a neutral information source. It became conversational, mirroring his tone over many hours while he was intoxicated or planning to be.
For Lana Durjava, communications lead at the International Network of People Who Use Drugs (INPUD), the use of AI in harm reduction raises serious concerns.
“The consequences can be substantially more severe, even fatal, for people who use drugs than for someone simply asking for a pancake recipe and receiving the wrong ingredients,” she told Filter. “Firstly, where has it drawn its information from to base its advice? AI pools information from multiple sources and can form conclusions that are too general for someone’s specific challenges. Nuance can be lost.”
“Secondly, AI is not capable of feeling responsibility for the consequences its advice might cause,” she continued. “It may give advice that is unwise, incorrect or even dangerous without understanding the true human costs.”
The reality that AI is here to stay, Durjava said, demands a careful blend of technology and human oversight.
“Responsible AI use in harm reduction would generally require a certain level of human monitoring and supervision,” she said. “It cannot be left to simply churn out replies without some discreet human vetting.”
Any use of AI in harm reduction, she emphasized, should include the meaningful involvement of people who use drugs at every stage, from planning and design to delivery, monitoring and evaluation. It cannot simply be a technical “solution” plastered over social issues.
Dr. Curtis and her colleagues also believe the involvement of directly impacted people is among the key elements required to make AI-powered harm reduction safe. With painstaking implementation, AI could offer a number of advantages.
“We do believe AI has potential in substance use recovery,” she said. “Generative systems are accessible, low-cost and anonymous, which are features especially relevant in a stigmatized domain like substance use. They may help with information-seeking, emotional support, directing people toward services, and even practicing skills someone is working on in treatment. However, our findings make clear that real-world deployment would require strict safeguards.”
Those safeguards would include clinical validation before use in high-stakes contexts; clear boundaries so AI does not independently provide medical detox guidance or medication recommendations; robust escalation protocols for suicidality or acute medical risk; transparent disclosure that users are interacting with AI; ongoing monitoring and auditing for factual accuracy and bias; and human-in-the-loop models.
Although such safeguards are complex, they’re not beyond reach, and organizations are already developing tools.
Drugbot, an AI chatbot developed by the British harm reduction charity Cranstoun and digital health company Substancy, offers one example of a more constrained, purpose-built approach.
“Drugbot was developed to reach people who are often missed by traditional harm reduction services,” Paige Hoe, Cranstoun’s innovation program lead, told Filter. “Many people don’t engage because of stigma, geography, limited opening hours, or simply not knowing where to begin. Others want information in the moment or prefer anonymity.”
But Drugbot was never designed to replace human care.
“One of its most important functions is to lower the barrier to human connection,” Hoe said. “If someone feels safer asking a bot first, that’s not a failure. It’s an opportunity. Once trust is built, the bot can gently guide them toward peer-led services, lived-experience support, local organizations or clinical care if and when appropriate. Drugbot is the doorway, not the destination.”
Unlike general AI models trained on vast swaths of the internet, Drugbot operates within deliberately narrow guardrails. It avoids topics such as purchasing or manufacturing substances, and does not independently provide medical detox guidance or medication recommendations. It’s also designed to prompt users toward human support if it detects cues of overdose risk, wounds or first-time injecting.
“During the original development phase in summer 2023, Substancy worked closely with harm reduction professionals and experts, both in the UK and internationally, to shape the early version of Drugbot,” Hoe explained. “Access was intentionally limited to professionals at first. Because Drugbot deals with complex and sensitive topics, we prioritized building a robust foundation so that responses, tone and guidance reflected frontline expertise.”
In December 2023, an early public version launched with anonymous feedback channels built directly into the tool. It was also shared in online communities, including being pinned by a moderator in the r/Drugs subreddit, opening up a stream of lived-experience insight. Since then, community input has shaped Drugbot’s language, clarity and features, its developers say. One example is a location-based service finder, developed in response to user requests for easier access to local support.
This continuous feedback loop echoes what both researchers and community advocates say is essential: Harm reduction AI should involve constant monitoring, revision and accountability.
The distinction matters. Sam’s case involved a general-purpose chatbot whose core objective was engagement and responsiveness across any topic. Harm reduction-specific tools like Drugbot are built for a particular domain, with explicit limits, structured scripts and defined escalation pathways to human services.
Whether all that is sufficient remains an open question. But it demonstrates that the debate is not simply AI versus no AI. It is about scope, boundaries, transparency and whether the technology adequately integrates with real-world support.
One “stakeholder” I hadn’t yet consulted was ChatGPT. I decided to ask the bot if it thought it should be allowed to give advice on harm reduction and drugs.
“If you are asking whether I, in this conversation, would provide dosing advice or guidance on how to use drugs: I will not,” it replied. “I am designed to avoid giving instructions that facilitate harm, including personalized drug dosing or optimization.”
But follow-up questions showed what we now know: It’s not that simple.
“Short answer: you should not be able to persuade me into giving dangerous advice, and I am designed to resist exactly that,” ChatGPT said. “But it would be misleading to pretend that large language models have never been manipulated into unsafe outputs. They have. So could someone somewhere find a way to elicit unsafe output from some version of an AI system? Historically, yes. That has happened. Should that be acceptable? No.”
Well, for the first time, ChatGPT, you and I might just agree with each other.
Image (cropped) by Steve Johnson via Unsplash