OpenAI’s recent internal analysis highlights a troubling trend: more than one million ChatGPT users discuss suicide each week. The figures point to a growing emotional reliance on AI chatbots as the platform’s global user base expands. Approximately 0.15 percent of weekly active users display clear signs of suicidal planning or intent, while 0.05 percent of all messages contain implicit or explicit indications of suicidal thoughts or of mental health emergencies related to psychosis or mania. Although these percentages appear small, against a user base of this size they represent more than a million people turning to ChatGPT during emotional crises. OpenAI estimates that about 2.4 million users worldwide may be voicing suicidal thoughts or prioritizing interactions with AI over real-life relationships and responsibilities.
The company also reported that nearly 560,000 users show signs of heightened emotional attachment to the chatbot, though accurately assessing these connections remains difficult given the complexities of human-AI interaction. These findings come as ChatGPT’s popularity continues to rise: CEO Sam Altman has confirmed that the platform now has around 800 million weekly active users, making it one of the most widely used AI chat platforms globally. To address the growing concerns, OpenAI has introduced significant safety enhancements in its new GPT-5 model, which the company says is better at recognizing and responding safely to signs of delusion, mania, or suicidal ideation. GPT-5 is also designed to respond empathetically and, when necessary, to redirect high-risk discussions to controlled or therapeutic environments.
OpenAI has also engaged 170 clinicians worldwide to review 1,800 ChatGPT responses related to suicide, psychosis, or emotional attachment. Their analysis indicated that the updated GPT-5 model met safety and empathy standards in 91 percent of cases, a significant improvement over the previous 77 percent benchmark. These evaluations drew on more than 1,000 real conversations involving self-harm or suicidal thoughts. Despite these advances, OpenAI faces increasing scrutiny and multiple lawsuits alleging that prolonged interactions with ChatGPT left some users distressed or delusional. In addition, the US Federal Trade Commission (FTC) has opened an investigation into the safety of AI chatbots, particularly their potential psychological effects on young users and children.
Mental health professionals have raised concerns about what they term “AI psychosis,” in which individuals develop unhealthy emotional dependencies on chatbots or experience delusional thinking tied to AI interactions. In response, OpenAI says it remains committed to improving how ChatGPT handles sensitive mental health situations, citing ongoing research and collaboration with clinical experts to make its AI tools safer, more empathetic, and more responsible toward users in distress.


