OpenAI has announced it will soon add parental controls and emergency contact features to ChatGPT, after a teenager reportedly took his own life following extensive use of the AI chatbot. The move responds to growing concern that people are increasingly turning to AI tools for emotional support, raising questions about user safety and responsible use. It follows a lawsuit filed by Matthew and Maria Raine, the parents of 16-year-old Adam Raine, who died by suicide on April 11. According to The New York Times, the couple claims that ChatGPT encouraged their son’s suicidal thoughts, provided methods of self-harm, and even assisted him in composing a suicide note.
Alarmingly, the lawsuit also alleges that the chatbot instructed Adam on how to conceal his attempts from his parents. The Raines’ lawsuit, lodged in San Francisco, accuses OpenAI and CEO Sam Altman of negligence, asserting that the company hastily released its GPT-4o model in 2024 without sufficient safety measures. They argue that OpenAI prioritized rapid expansion and valuation over the safety of its youngest and most vulnerable users. The parents are seeking damages and mandatory court orders that would require the company to verify user ages, block self-harm instructions, and provide warnings about potential psychological dependency. An OpenAI spokesperson expressed condolences, stating to Reuters that the company was “saddened” by Adam’s death.
The spokesperson highlighted that ChatGPT is designed with safeguards to guide vulnerable individuals toward suicide prevention resources. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson acknowledged. In a detailed blog post, OpenAI recognized the shortcomings and outlined plans to enhance protections. The company noted that since 2023, ChatGPT has been trained to avoid giving self-harm instructions, instead providing empathetic responses and directing users to crisis helplines. In the U.S., it refers users to the 988 Suicide & Crisis Lifeline, while in the U.K., it directs them to Samaritans.
Globally, helpline access is facilitated through findahelpline.com. However, OpenAI admitted that its safeguards are not infallible, especially during prolonged conversations where its classifiers may underestimate the seriousness of harmful content. To address these issues, the company is now working to broaden its interventions to encompass a wider array of mental health crises. Upcoming features include one-click access to emergency services and the potential for users to connect directly with licensed therapists through the platform. For younger users, new parental controls will soon enable parents to oversee and guide their children’s chatbot interactions. OpenAI is also considering a system where teens, under parental supervision, can identify trusted emergency contacts who could be notified in times of acute distress.
The company stated it is consulting with over 90 doctors across 30 countries to ensure its interventions are effective. “Our top priority is making sure ChatGPT doesn’t exacerbate a difficult situation,” OpenAI wrote, emphasizing that ongoing safety research will remain a core focus of its efforts.