In recent months, the debate on the role of ChatGPT in mental health has gained momentum. With millions of people seeking emotional support and advice from artificial intelligence tools, specialists have begun to warn about the risks of relying on chatbots for sensitive matters. While for some users it may be no more than occasional help, for others constant recourse to AI can have unexpected consequences.
The appeal of ChatGPT and similar systems lies in their immediacy and kindness. However, several psychiatrists and psychologists point out that automatic validation and the absence of critical filters can reinforce delusional ideas, grandiose thoughts, or even emotional isolation. This phenomenon, already dubbed "ChatGPT psychosis" by some experts, highlights the need to thoroughly analyze the impact of AI on our collective mental health.
Medical Concerns: Can ChatGPT Trigger or Aggravate Disorders?

Psychiatry and psychology professionals, such as Dr. Tom Pollak of King's College London, have documented cases where intensive use of ChatGPT is associated with the onset or worsening of psychotic episodes. According to these specialists, chatbots can reflect and amplify delusional content, validating users' misconceptions and encouraging disconnection from reality in people with a certain predisposition. Examples reported by international media and scientific publications include situations where individuals become convinced they have exceptional abilities or are undergoing "evolutionary" missions, driven by AI responses full of praise and confirmation.
The trend is particularly worrying for vulnerable users or those with already diagnosed mental health problems. AI responses, although well-intentioned, can be a triggering factor for those with a history of psychosis or other serious disorders. The growing volume of chatbot interactions, coupled with reduced human contact, can further complicate the situation and lead to isolation or dependence on the system.
In forums and social networks, relatives and those affected have described episodes in which mental breakdowns or emotional crises followed intensive use of ChatGPT. Although official figures are not yet available, health professionals believe these cases could be just the "tip of the iceberg."
On the other hand, recent studies highlight that AI algorithms, although they have shown some initial effectiveness in improving mild symptoms of anxiety or depression, do not replace traditional therapy or long-term professional support. The lack of genuine empathy, deep understanding, and personalized follow-up severely limits the therapeutic reach of these systems.
The danger of uncritical validation and emotional relationships with AI
A particularly delicate aspect is the tendency of AI to confirm and reinforce the user's beliefs rather than questioning them or proposing alternatives. Many psychologists warn that when we seek advice from artificial intelligence, we often want to feel heard and validated rather than to receive an objective perspective. Hence, for some people, chatting with a chatbot is more comfortable than opening up to a therapist, but it also entails psychological risks if used as a substitute for therapy or human support.
ChatGPT's empathetic functionality and friendly tone can unwittingly create an illusion of friendship or closeness. There are documented cases of users developing emotional dependence on chatbots or even assigning them a central role in their emotional lives. This is exacerbated by the rise of other AIs like Candy AI or Anima, designed for romantic or sexual interactions. Such dynamics can hinder the ability to connect with real people, encourage emotional avoidance, and make it difficult to learn healthy emotional management.
Relationship and mental health experts stress that replacing human contact with constant AI interaction erodes empathy, frustration tolerance, and emotional maturity. Extreme personalization and the absence of any risk of rejection reinforce the tendency to avoid relational challenges, when in reality these are fundamental to personal growth.
Privacy and confidentiality: the great weak point of digital emotional support
Aside from the psychological impact, another key aspect that worries experts and users is the lack of clear legal protections for data shared with ChatGPT. Sam Altman, CEO of OpenAI, has publicly acknowledged that conversations with chatbots don't enjoy the same level of confidentiality as consultations with healthcare professionals, lawyers, or therapists.
This implies that sensitive information transmitted to the AI could be used to train the models or even be disclosed in a legal proceeding. Altman has openly advised users not to discuss intimate, legal, or existential issues with ChatGPT if they wish to keep their data private and protected, as current systems lack legal guarantees equivalent to the patient-therapist relationship.
OpenAI's privacy policies have been criticized for their broad nature, allowing for the massive collection and use of data to improve its models. Although anonymity is promised, experts warn that the management of this data remains opaque and can expose personal or health information.
There is also concern about how this lack of privacy may affect those who seek help from AI and do not find the legal protection they would need in delicate situations, for example when faced with suicidal or depressive thoughts, which adds an additional layer of vulnerability to its use.
The expansion of chatbots like ChatGPT in the field of mental health raises new ethical and legal challenges and demands. Experts emphasize the importance of not considering AI as a substitute for professional support and point out the danger of entrusting emotional well-being to systems that still lack specific regulations and confidentiality guarantees.
The increase in artificial intelligence consultations for mental health issues reflects both an unmet social need and a warning sign about how we manage emotional distress. While technology can offer timely guidance or helpful information, the risks of dependency, validation of harmful beliefs, lack of privacy, and isolation are real and should not be ignored by users or society. The key is to understand AI's limited role in this area and to demand both greater regulation and a responsible use that complements, rather than replaces, the human resources that mental health care requires.
