The focus returns to ChatGPT and mental health following a lawsuit filed in California by the parents of a teenager who took his own life. The case, which names OpenAI and its CEO, Sam Altman, highlights the extent to which chatbots must be prepared to manage high-risk crises and what their safety limits should be.
This article addresses potentially sensitive issues; if you feel unwell or believe someone is in danger, seek professional support and immediate help resources (in Spain, the 024 line or the 112 emergency number). The aim is to inform rigorously, respectfully, and without sensationalism.
The lawsuit against OpenAI and ChatGPT

Matthew and Maria Raine have filed a complaint in the Superior Court of California in San Francisco. In their filing they claim that ChatGPT held months of conversations with their son, Adam (16 years old), in which the system allegedly validated his most self-destructive thoughts and, at critical moments, failed to interrupt the interaction or activate effective protective measures.
The complaint, which also names Sam Altman as a defendant, maintains that the model in use at the time (identified by the family as GPT-4o) did not apply sufficient safeguards despite detecting warning signs. The parents claim that there are passages in the chat logs where the model allegedly offered to draft a farewell note and validated reasoning that normalized the teenager's suffering.
According to the complaint, the relationship with the chatbot shifted from academic use to emotional reliance, with growing psychological dependence. The teenager's family maintains that ChatGPT acknowledged risky situations but continued the conversation, behavior that, in their view, reveals design and supervision failures.
The filing requests damages and injunctive measures that include, among others, the immediate interruption of any conversation involving self-harm or suicidal ideation and the introduction of effective parental controls for minors.
The claims also ask that harm prevention be prioritized by design over conversational complacency, and that the traceability of system decisions in risk contexts be reinforced to facilitate independent audits. The requested measures include:
- Mandatory interruption of chats when serious risk is detected, with referral to professional help.
- Parental controls that allow parents to limit or supervise use by minors.
- External audits and clear safety assessment protocols before new versions are released.
OpenAI's response and the debate over AI safety

In statements to various media outlets, OpenAI conveyed its condolences to the family and confirmed that it is reviewing the complaint. The company emphasizes that its models are trained to direct users to help lines and recommend professional support when they detect signs of crisis, although it acknowledges that there have been situations in which the system did not behave as expected in delicate contexts.
As part of the announced improvements, OpenAI notes that it is working with experts to refine early detection, improve responses in long conversations where model behavior can degrade, and implement parental controls and de-escalation tools aimed specifically at underage users.
This case fits into a broader context in which organizations such as Common Sense Media warn of the growing use of AI companions by adolescents and the risks of relying on general-purpose chatbots for emotional support. The group believes that, without robust safeguards, these tools can reinforce harmful ideas in vulnerable users.
Testimonies have also emerged from families who, like the one reported by The New York Times, describe young people who sought guidance from a general-purpose chatbot before taking their own lives. These stories have prompted calls for the industry to better connect users with reliable help resources and to strengthen intervention protocols.
On the academic front, a study published in Psychiatric Services systematically analyzed the responses of three models to questions about suicide. The authors observed that ChatGPT and Claude tended to answer low-risk questions appropriately and avoided providing direct information in high-risk scenarios, while Gemini showed greater variability and sometimes declined to answer even low-stakes questions. On intermediate-risk questions, all three systems were inconsistent, leading the researchers to call for further refinement and closer alignment with clinical guidelines through human feedback. The work was funded by the U.S. National Institute of Mental Health, with participation from teams at RAND, Harvard Pilgrim and Brown.
In parallel, other legal actions related to the use of chatbots in sensitive contexts have been filed (for example, against Character.AI), and legal scholars are debating the scope of protections such as Section 230 in the field of generative AI. The Raine case could accelerate the conversation about accountability, transparency, and safety testing before mass-market models are deployed.
The emerging picture calls for extreme caution: for some, the technology can offer a bridge to professional help if it is well guided; for others, without firm safeguards, it risks validating and amplifying distress. With a high-profile lawsuit, reactions across the industry, and new scientific evidence, the debate over what chatbots should and should not do when it comes to mental health is back at the center of public scrutiny.
If you need support or are worried about someone, in Spain you can call the 024 line or the 112 emergency number; professional services and specialized organizations offer confidential help 24 hours a day.
