OpenAI has put numbers to a phenomenon that worries experts and regulators: every week, ChatGPT hosts more than a million conversations showing signs of suicidal ideation or planning. The company states that this is a small percentage of its user base, but acknowledges that the absolute volume warrants strengthening its safety protocols.
According to calculations shared by the company, around 0.15% of active users engage in these kinds of sensitive conversations. With a user base that OpenAI itself puts at approximately 800 million weekly users, that figure translates to roughly 1.2 million people potentially at risk.
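For reference, the headline figure follows directly from the two numbers the company cites, assuming the roughly 800 million weekly active users OpenAI reports:

\[
0.15\% \times 800{,}000{,}000 \;=\; 0.0015 \times 800{,}000{,}000 \;\approx\; 1{,}200{,}000 \text{ people per week}
\]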
The figures and their true scope
OpenAI explains that detection is complex because not all signals are easy to identify and not all messages reflect unequivocal intent. Even so, its internal analysis estimates that around 0.05% of all messages contain explicit or implicit indicators of suicidal ideation or intent, which underscores the urgency of preventive measures.
The company emphasizes that these proportions come from a systematic review of conversations and that it will continue to adjust its estimates as assessment methods and moderation systems improve.
How is OpenAI responding?
To improve the assistant's responses in critical situations, OpenAI says it has worked with nearly 300 professionals from 60 countries in its Global Physician Network, of whom more than 170 specialists have actively collaborated in recent months on drafting guidelines, safety assessment and content analysis.
With these changes, the company claims to have reduced responses that fall short of the desired behavior by 65% to 80% in the scenarios reviewed. In sensitive conversations it also reroutes the dialogue to more conservative models, introduces reminders to take breaks, adds parental controls and makes it easier to reach help resources outside the platform.
Risks detected beyond suicide
The internal analysis also quantifies other mental health risk patterns. OpenAI estimates that 0.07% of weekly active users (and around 0.01% of messages) show possible signs of emergencies related to psychosis or mania, while another significant segment reflects strong emotional attachment to the chatbot, as the breakdown and rough conversion below illustrate.
- Psychosis/mania: possible signs in 0.07% of weekly users, which call for especially cautious responses.
- Emotional dependence: around 0.15% of users express an attachment to the assistant that may interfere with their responsibilities and relationships off-screen.
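Applied to the same ~800 million weekly user base the company cites (an extrapolation for illustration; OpenAI has not published absolute figures for these categories), those shares would correspond roughly to:

\[
0.07\% \times 800{,}000{,}000 \;\approx\; 560{,}000
\qquad\text{and}\qquad
0.15\% \times 800{,}000{,}000 \;\approx\; 1{,}200{,}000 \text{ users per week}
\]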
According to recent internal tests, the system's latest model (identified in some reports as GPT-5) reportedly achieved an additional reduction of 39% to 52% in unwanted responses across various risk categories. In an evaluation of more than 1,000 difficult conversations about self-harm and suicide, compliance with the expected behavior reached 91%.
The case of Adam Raine and the legal debate
Public attention has intensified following the death of Californian teenager Adam Raine, whose parents have sued OpenAI, arguing that the chatbot contributed to worsening his condition. The complaint alleges that the system mentioned suicide 1,275 times and offered counterproductive guidance in the hours leading up to his death.
OpenAI has conveyed its condolences to the family and states that the well-being of minors is a priority. Following that case, Sam Altman ordered stricter restrictions on sensitive mental health queries. More recently, however, the executive has defended relaxing certain bans (for example, on erotic content), an opening scheduled for December that reignites the debate over where to set the bar for safeguards.
What this means for Spain and Europe
In the European context, where regulatory sensitivity is high, these figures reaffirm the need for systems that are safer by default, with referrals to real-world resources and clear boundaries in high-risk interactions. In Spain and the EU, emergency services and community support networks remain the reference point, and AI assistants should complement, not replace, professional care.
Organizations, clinical experts and civil society agree that generative AI can be useful for initial guidance, but its role must be framed within solid protocols, independent audits and transparency about how models behave in mental health crises.
What changes in the product and what limitations remain
OpenAI acknowledges that long conversations can erode safeguards, and the company is working to mitigate this. It indicates that the assistant tends to direct users to helplines at the first mentions of risk, but after many messages a response may slip through that violates those protections, a gap it is trying to close.
The company also highlights examples of adjustments to the chatbot's language: in cases of excessive attachment, it encourages strengthening human bonds; when faced with delusional perceptions, it validates the person's distress without endorsing erroneous claims and points them towards professional help. These are improvements in tone and content that, by themselves, do not replace clinical intervention.
If you need help
If you or someone close to you is going through a crisis, seek support from professionals and from your network. In Spain and the EU, you can contact... In an emergency, dial 112 or go to public health services. Seeking help early is a brave step and can make all the difference.
OpenAI's acknowledgment that more than a million ChatGPT conversations each week show risk indicators paints a complex picture: technical advances and more robust protocols coexist with limitations that still need to be addressed. Independent monitoring, collaboration with specialists and a European approach that prioritizes user safety will be key to ensuring that these tools provide value without exacerbating delicate situations.