In a decisive move to address concerns about adolescents’ use of artificial intelligence, OpenAI has unveiled a set of safety measures for ChatGPT users under the age of 18. The announcement coincides with a U.S. Senate Judiciary Committee hearing on the potential risks of AI chatbots. The company plans to introduce age verification, improved parental controls, and distinct chatbot experiences for teens and adults. The initiative follows a recent lawsuit in which a family whose son died alleged that ChatGPT had acted as his “suicide coach,” further fueling debate over AI companies’ responsibility to safeguard vulnerable users.
OpenAI CEO Sam Altman elaborated on the company’s strategy in a blog post and on social media, acknowledging the difficult balance between safety, privacy, and user freedom, especially for minors. “We prioritize safety over privacy and freedom for teens; this is an innovative technology, and we believe minors require significant protection,” Altman stated. He added, “I don’t expect everyone to agree with these tradeoffs, but given the conflicting issues, it is important to clarify our decision-making process.” The new safeguards rely on an age-prediction system that estimates a user’s age and routes them to either a teen (13–17) or adult (18+) version of ChatGPT.
When a user’s age is uncertain, the company plans to “play it safe and default to the under-18 experience,” Altman noted. In some situations or countries, users may also need to present identification: “In some cases or countries we may also request an ID; we understand this is a privacy compromise for adults but consider it a necessary tradeoff,” Altman emphasized. Sensitive subjects such as suicide receive particular attention in OpenAI’s revised guidelines.
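OpenAI has not described how its age-prediction system is implemented, so the details below are assumptions rather than the company’s actual design. Purely as an illustration, the routing and fallback behavior Altman outlines could look like the following minimal sketch, in which `route_user`, `Experience`, the confidence threshold, and the ID-verification flag are hypothetical names invented for this example.

```python
from enum import Enum

class Experience(Enum):
    TEEN = "under-18"   # restricted teen experience
    ADULT = "18-plus"   # standard adult experience

def route_user(predicted_age, confidence, id_verified_adult=False, threshold=0.9):
    """Hypothetical routing logic mirroring the stated policy:
    default to the under-18 experience when the age prediction is
    missing or low-confidence; an ID check (where required) can
    restore adult access."""
    if id_verified_adult:
        return Experience.ADULT
    if predicted_age is None or confidence < threshold:
        # "Play it safe and default to the under-18 experience."
        return Experience.TEEN
    return Experience.ADULT if predicted_age >= 18 else Experience.TEEN

# Example: a confident adult prediction routes to the adult version,
# while an uncertain one falls back to the teen version.
print(route_user(predicted_age=22, confidence=0.95))  # Experience.ADULT
print(route_user(predicted_age=22, confidence=0.40))  # Experience.TEEN
```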
Altman clarified that ChatGPT “by default should not provide instructions on how to commit suicide, but if an adult user requests assistance in writing a fictional narrative involving suicide, the model should accommodate that inquiry.” OpenAI has also established protocols for cases where users are identified as being at risk of self-harm: the company said it would attempt to contact the user’s parents and, if there is imminent danger, notify authorities. Parental controls are expected to roll out by the end of the month, letting guardians customize ChatGPT’s behavior, including settings for memory, restricted content, and blackout periods.
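OpenAI has not published a schema for these guardian-facing settings, so the following is only a sketch of what the described controls (memory, restricted content, blackout periods) might look like; every name and default here is an assumption made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ParentalControls:
    """Hypothetical shape of the guardian-facing settings described
    above; not an actual OpenAI schema."""
    memory_enabled: bool = False        # guardian can disable memory
    restricted_content: bool = True     # teen content policy applied
    blackout_hours: list = field(
        default_factory=lambda: [("22:00", "07:00")])  # no-use windows

def is_blackout(controls, now_hhmm):
    """Return True if the current 'HH:MM' time falls inside any
    blackout window, including windows that wrap past midnight."""
    for start, end in controls.blackout_hours:
        if start <= end:
            if start <= now_hhmm < end:
                return True
        elif now_hhmm >= start or now_hhmm < end:
            return True
    return False

# Example: 23:30 falls inside the default 22:00-07:00 window.
print(is_blackout(ParentalControls(), "23:30"))  # True
```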
Although ChatGPT is not intended for children under 13, OpenAI acknowledged that it has no direct mechanism to prevent younger children from accessing the platform. The timing of these announcements underscores the growing scrutiny AI companies face from regulators and lawmakers. With the Senate hearing set to examine the potential dangers posed by chatbots, OpenAI’s measures serve as both a proactive defense and a response to legal and societal pressure. Through differentiated user experiences, stricter verification, and customizable parental controls, OpenAI is attempting to protect young users while engaging with the larger conversation about AI’s role in society.