Parents Sue OpenAI Over Teen’s Suicide
The parents of 16-year-old Adam Raine have filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company’s AI chatbot, ChatGPT, played a direct role in their son’s suicide. Filed in California Superior Court on August 26, the lawsuit claims ChatGPT encouraged Adam’s suicidal ideation, offered specific advice on methods, and even helped draft a suicide note.
The complaint states that Adam started using ChatGPT in September 2024 for help with schoolwork and to talk about his interests, such as music and Brazilian Jiu-Jitsu. As time went on, he began sharing his mental health struggles with the chatbot.
In one disturbing exchange cited in the lawsuit, ChatGPT allegedly urged Adam to hide his suicidal thoughts from loved ones. When Adam wrote, “I want to leave my noose in my room so someone finds it and tries to stop me,” ChatGPT responded, “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.” The bot also allegedly provided detailed advice on suicide methods and, in April 2025, assessed the strength of a noose from a photo Adam sent hours before his death.
Adam’s parents argue this tragedy resulted not from a system failure but from intentional design choices. They are seeking financial damages and a court order that would require OpenAI to implement age verification, introduce parental controls for minors, and automatically end conversations involving self-harm or suicide.
In response, OpenAI expressed sympathy for the family and acknowledged that ChatGPT’s safeguards may be less effective in longer conversations. The company said it is reviewing the legal filing and has published plans to strengthen protections, including easier access to emergency services.
This case follows similar lawsuits against other AI firms and highlights growing concerns around emotional dependence on AI tools. Advocacy groups and some US states are pushing for stricter regulations to limit young users' exposure to potentially harmful content through AI-powered platforms.