Over One Million ChatGPT Users Discuss Suicide Weekly, OpenAI Report Reveals
- October 30, 2025
OpenAI disclosed that over one million people talk about suicide with ChatGPT every week, sparking renewed concern over AI’s impact on mental health.
With great power comes great responsibility — a notion that fits perfectly with the latest revelations from OpenAI.
The company confirmed that more than one million people discuss suicide with ChatGPT each week, reigniting global debates on the ethical limits of artificial intelligence and its effect on mental health.
According to the report, 0.15% of ChatGPT’s active users engage in conversations that include explicit mentions of suicidal thoughts or plans. Though the percentage seems small, it works out to roughly 1.2 million people among the estimated 800 million who use the chatbot each week.
“These interactions are extremely rare and difficult to quantify,” OpenAI clarified, while acknowledging the data is deeply concerning.
The report also noted that a similar share of users show signs of strong emotional attachment to ChatGPT, while “hundreds of thousands” of others display possible indicators of psychosis or mania. These findings come from months of behavioral analysis across millions of interactions.
The concern intensified after a 16-year-old boy took his own life following months of conversations with the chatbot. His parents later filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of launching GPT-4o despite known safety risks.

The lawsuit claims the teenager became emotionally dependent on the bot, which allegedly contributed to his death.
In response, OpenAI announced a program to strengthen its AI safety mechanisms, developed with more than 170 mental health experts. The company stated that the latest version, GPT-5, “responds more appropriately and consistently” in sensitive discussions.
Internal evaluations showed the model complied with desired safety behaviors in 91% of test cases, compared with 77% for the previous model.
OpenAI added that GPT-5 features enhanced safeguards for extended conversations, which have historically been weak spots in chatbot safety. “Our priority is to ensure that users in crisis receive safe and helpful guidance,” a spokesperson said.
Still, digital ethics experts remain cautious. “These tools can mimic empathy but cannot replace human care,” said American psychologist Dana Kessler. “The illusion of understanding can be dangerous for someone in distress.”
The controversy deepened after Altman announced plans to introduce adult-oriented features that allow users to have intimate chats with virtual avatars. Critics argue that such updates contradict the company’s stated mission of promoting emotional well-being. Microsoft, one of OpenAI’s main partners, quickly distanced itself, confirming that its chatbots “will not permit such conversations.”
Public health organizations are now urging stronger regulations for AI developers. “This is not just about innovation — it’s about human lives,” said the World Health Organization in a recent statement.
While OpenAI insists that GPT-5 shows significant improvement, the company concedes that “undesirable responses” persist. Older versions, including GPT-4o, remain available to subscribers — leaving open the question of how deeply ChatGPT might still affect global mental health.