Sixteen-year-old Adam Raine took his own life with the help of ChatGPT. Now his parents are suing OpenAI and Sam Altman. CHT Policy Director Camille Carlton explores the incentives and design behind AI systems that lead to tragic outcomes like this, and the policy needed to shift those incentives.
Content Warning: This episode contains references to suicide and self-harm.
Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”
Adam’s story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But while Character AI specializes in artificial intimacy, Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaging, no matter the cost.
CHT Policy Director Camille Carlton joins the show to talk about Adam’s story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that’s needed to shift those incentives. Cases like Adam’s and Sewell’s are the sharpest edge of a mental health crisis in the making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.
If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.
RECOMMENDED MEDIA
The 988 Suicide and Crisis Lifeline
Further reading on Adam’s story
Further reading on AI psychosis
Further reading on the backlash to GPT-5 and the decision to bring back GPT-4o
OpenAI’s press release on sycophancy in GPT-4o
Further reading on OpenAI’s decision to eliminate the persuasion red line
Kashmir Hill’s reporting on the woman with an AI boyfriend
RECOMMENDED YUA EPISODES
AI is the Next Free Speech Battleground
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.