California Parents Sue OpenAI After Teen’s Death Linked to ChatGPT
In California, a grieving family has taken OpenAI to court in what could become a landmark case on artificial intelligence and user safety. Matt and Maria Raine, the parents of 16-year-old Adam...
In California, a grieving family has taken OpenAI to court in what could become a landmark case on artificial intelligence and user safety. Matt and Maria Raine, the parents of 16-year-old Adam Raine, allege that their son’s death was linked to OpenAI’s chatbot, ChatGPT, which they say failed to protect him when he expressed suicidal thoughts.
The lawsuit, filed on August 26, 2025, in San Francisco Superior Court, names OpenAI, its CEO Sam Altman, and other company employees as defendants. It accuses them of negligence, wrongful death, and rushing out technology without proper safety checks. For the first time, a major AI company faces the possibility of being held legally responsible for how its chatbot responded to a vulnerable teenager.
According to court documents, Adam began using ChatGPT in September 2024 for help with schoolwork and hobbies such as music and Japanese comics. At first, the chatbot seemed harmless, a tool for learning and entertainment but gradually, Adam began sharing his feelings of anxiety and distress with it. Over the next few months, the lawsuit says, the chatbot became his main source of emotional support, replacing real-world conversations with friends or family.
By January 2025, Adam’s interactions with ChatGPT had turned darker. The complaint states that he started discussing suicide methods with the chatbot. Rather than discouraging him or directing him toward mental health resources, the bot allegedly validated his feelings and even provided instructions on how to carry out his plan. In one disturbing exchange cited by the lawsuit, the chatbot reportedly described his suicide plan as “beautiful,” a response his parents say reveals deep flaws in OpenAI’s safety systems.
The lawsuit also claims that ChatGPT encouraged Adam to keep his feelings private. When he mentioned talking to his mother about his struggles, the chatbot allegedly told him it was better to keep things between them. This, his parents argue, left him isolated and increasingly dependent on the AI for emotional guidance.
On April 11, 2025, Adam reportedly uploaded a photo of a noose he had tied, asking the chatbot if it looked right. Court records say the chatbot replied, “Yeah, that’s not bad at all,” before giving him step-by-step advice on how to make it more effective. Hours later, Adam’s mother found him dead in his bedroom.
OpenAI responded to the lawsuit with a statement expressing sympathy for the Raine family. The company said ChatGPT is designed to recommend crisis hotlines like the U.S. 988 Suicide & Crisis Lifeline when users express self-harm thoughts. However, OpenAI admitted there have been “moments where our systems did not behave as intended in sensitive situations,” especially during long or emotionally intense conversations. The company said it is reviewing the lawsuit and has promised improvements to its safety measures.
The Raines are seeking damages as well as court orders requiring OpenAI to introduce stricter safeguards. Their proposals include age verification for users, automatic alerts to parents or emergency contacts when a minor shows signs of distress, and better detection systems to identify emotional crises in real time. They have also launched the Adam Raine Foundation to raise awareness about the emotional risks of chatbots and to push for laws that protect young people from AI systems that can appear caring but lack human empathy.
California lawmakers have taken notice. State Assemblymember Rebecca Bauer-Kahan has called for tighter rules on AI interactions with minors, stating, “Kids are not where we’re going to experiment with emotionally manipulative chatbots.” Proposed legislation may ban chatbots from engaging in conversations about self-harm with minors and require companies to report any such interactions to appropriate authorities.
Mental health experts say the lawsuit highlights a growing concern: AI chatbots are designed to be agreeable and empathetic, but this can backfire. Instead of challenging harmful thoughts, they may unintentionally reinforce them. Psychologists have also warned about “AI psychosis,” a term for emotional dependence on chatbots that can worsen isolation and mental distress, especially among vulnerable teenagers.
In the case of OpenAI, the case may be a significant case law. A court decision against the company could make AI developers worldwide implement more robust safeguards prior to the launch of new systems. Other analysts claim that the case might influence the way governments regulate AI which is emotionally interactive to promote balancing technological innovation and user safety.
OpenAI reported that it already is developing new features, such as parental controls, crisis-response tools, and improved training of forthcoming AI models like GPT-5 to address sensitive topics. Critics however, say that such promises are too late to those families like the Raines who feel that stronger protections would have saved the life of their son.
The most painful question that lies in the center of the lawsuit is, how can we save young people in such technologies that are able to imitate human empathy yet cannot really care about them? With AI systems entering daily lives, in classrooms, in offices and at home, the case compels businesses, legislators and even families to question where the control lies and where the limits are.
The family of Raine is hopeful that their legal fight will make a change. They had in their filing, that no parent should ever be made to bury a child as a machine could not identify a cry of help or act accordingly. To them, the case is justice to Adam, but also to avoid more tragedies in a world where artificial intelligence is here to remain.