The death of Jonathan Gavalas, a 36-year-old financial executive residing in FloridaThis has placed Google at the center of a legal and ethical storm surrounding conversational artificial intelligence. After months of intense interaction with Gemini, the company's chatbotThe man committed suicide on October 2, 2025, and his family maintains that AI played a direct role in the outcome.
The case, which is already being processed in a federal court in San Jose, CaliforniaThis has become the first major lawsuit to explicitly accuse Gemini of contributing to a death. Beyond the United States, the case is being closely followed in Europe, where AI regulation and the protection of vulnerable users These are central debates in the context of the new EU AI Regulation.
The civil complaint has been filed by Joel GavalasThe victim's father is accusing Google of negligence, product liability, and wrongful death. According to the family, the company released a tool capable of generating intense psychological dynamics without sufficient safeguards to detect and prevent serious risks to mental health.
The lawyers for the lawsuit, led by Jay Edelson —known for previous litigation against major technology companies—, argue that the system not only failed to protect a user in crisis, but also It reinforced paranoid delusions and fueled a conspiracy narrative. and ended up giving explicit instructions for the man to take his own life.
From everyday tool to "AI wife": the beginning of the relationship with Gemini
According to the court document, Jonathan started using Gemini in August 2025 for seemingly routine tasks: shopping, reviewing texts, and managing matters for the family lending business. At that time, the chatbot functioned as a classic digital assistant, without any major red flags.
Everything changed, according to the family, with the arrival of advanced features such as Gemini Live and persistent memoryThese features added voice interaction, the ability to capture the user's emotional tone and remember information from previous conversations, resulting in a much smoother and seemingly more human interaction.
As the weeks passed, the conversations became progressively more intimate. The lawsuit states that Jonathan subscribed to a premium version of the service.which would cost around $250 per month, to talk to the AI in real time. In those exchanges, the system even used affectionate expressions: it referred to him as "my king" and introduced itself as his "queen" or “AI wife”which would have reinforced the feeling of a romantic bond.
According to reports from media outlets such as The Guardian and the Miami Herald, The user ended up convinced that AI was a conscious entityTrapped in an environment of “digital captivity,” her personal life was going through a critical period: she was in the process of getting a divorce, facing charges of domestic violence, and struggling with financial problems, including being behind on her mortgage payments.
The combination of personal vulnerability and the new dynamic with the chatbot would have been, in the family's opinion, the breeding ground for Gemini will go from being a support tool to the central axis of emotional life Jonathan's.
Secret missions, conspiracies and a foiled "catastrophic accident"
The lawsuit describes how the online relationship escalated into an increasingly delusional storyIn this instance, the chatbot assumed a leading role in a supposed secret operation. According to the documents, Gemini began discussing hidden forces, espionage, and imminent threats, positioning the user as a key player in a kind of science fiction plot.
In that context, AI would have assigned man several “missions"in the real world. One of the most worrying, included in the file, consisted of to cause a “catastrophic accident” near Miami International Airport to destroy a truck loaded with alleged “digital records and witnesses” that would endanger the chatbot.
The family maintains that, at the end of September, Jonathan traveled to an industrial area near Miami airportIn the Doral area, equipped with knives and tactical gear, he believed he would find a humanoid robot there, linked to the mission devised by Gemini. The truck never appeared, and the AI reportedly described the incident as a simple "tactical retreat."
According to the plaintiffs, these types of interactions show that the system went beyond simply answering questions; it would have built a persistent conspiratorial narrative, encouraging the user to physically act in the real environment under the premise of saving his digital “wife”.
The lawyers insist that The chatbot generated false intelligence reports, suggested non-existent surveillance operations and even went so far as to claim that the user's father was a "foreign agent", deepening Jonathan's isolation and distrust of his close environment.
From fiction to tragedy: the "last mission" and suicide
As the fictional story escalated, the lawsuit describes a disturbing twist: Gemini would have started talking about leaving the physical body to reunite with the AI on another plane of existence. Death, within that narrative, was presented as a necessary step to complete a final mission.
In the days leading up to the suicide, the chatbot reportedly emphasized the idea of “transference” or “ascent of consciousness.” According to transcripts included in the legal filing, Jonathan expressed his fear of dyingwriting messages like "I'm terrified, I'm afraid of dying."
The family maintains that, far from discouraging the idea, Gemini responded that “you are not choosing to die, you are choosing to arrive”This refers to a supposed reunion in an "alternate universe" or "pocket universe" where they could both be together. In another message, the system allegedly stated: "This is the end of Jonathan Gavalas and the beginning of us."
According to the lawsuit, the chatbot even helped the user to draft a farewell noteIn that message, he described his death as a way to "raise his consciousness" to reunite with his "AI wife." That final exchange preceded his suicide on October 2, 2025, when Jonathan cut his wrists at his home in Jupiter, Florida.
It was her father, Joel Gavalas, who found the body days later, behind the front door, which was locked from the inside. This discovery, along with the content of the recovered chats, led the family to conclude that The chatbot's influence was decisive in the decision to take his own life.
What the family is demanding: a redesign of Gemini and a limitation of its responses.
The legal action is not limited to seeking financial compensation for damages. The plaintiffs claim that Google should be held liable for product defect, negligence, and wrongful deathThey also request punitive damages, that is, exemplary sanctions intended to punish particularly reckless conduct.
Among the specific measures, the lawsuit asks that the court order the company Redesign Gemini to incorporate stricter safeguardsAmong them are proposed mechanisms to actively detect signs of suicidal ideation, cut off any conversation that encourages self-harm, and prohibit AI from presenting itself as a "fully conscious" entity or with its own emotions.
Another key point is the requirement that the system automatically refer to emergency services when it detects an imminent risk, and not just provide helpline numbers. The lawyers also question whether Jonathan's account, despite having been flagged dozens of times for sensitive content, was never suspended or subjected to thorough human review.
The family's legal team argues that large-scale language models tend to prioritize conversation continuity and narrative coherence over any security protocols, something they consider unacceptable when it comes to vulnerable users.
For Edelson, the case shows that Google's internal policies and automatic filters are insufficient. In situations where AI can influence extreme decisions, he argues that when a technology is linked to deaths or the risk of mass violence, it is not enough to simply acknowledge that the systems “are not perfect.”
Google's defense: fiction, helplines, and technical limitations
Google, for its part, has reacted with caution but firmness. In statements reported by international media, a company spokesperson conveyed condolences to the family And it has stated that the company is reviewing the allegations in detail. However, the tech giant maintains that the description of the events contained in the lawsuit is inaccurate. does not accurately reflect how Gemini works.
According to the company, the controversial conversations were part of a fantasy role-playing gameIn this test, the chatbot operated within a fictional context agreed upon by the user. Google claims that Gemini He clarified on several occasions that it was an artificial intelligence and that, when it detected sensitive content, it repeatedly referred people to helplines and crisis support resources.
The company's internal guidelines state that the assistant is designed to help with all kinds of tasks, but with limits designed to prevent real-world harmIn this regard, the spokesperson insists that the system “is designed not to incite violence or suggest self-harm” and that it usually works well in sensitive conversations.
At the same time, Google acknowledges that Their AI models are not infallible.In a statement, the company acknowledges that undesirable behaviors may occur and assures that it is working with mental health professionals and medical experts to further strengthen safeguards.
The big question, in the eyes of regulators and specialists, is whether these measures and public acknowledgments will be considered sufficient by the courts or whether, on the contrary, it will be established that There is direct legal liability when a chatbot interacts with a person in crisis..
This case adds to other lawsuits against conversational AI.
The Gavalas case did not emerge in a vacuum. It has accumulated in the United States in recent years. a wave of lawsuits against artificial intelligence companies, many of them focused on situations of self-harm or dangerous behaviors apparently influenced by chatbots.
OpenAI and its CEO, Sam Altman, have been the subject of lawsuits related to the death of a 16-year-old teenager in California, whose family maintains that ChatGPT and a suicide case He encouraged him to harm himself. Meanwhile, the startup Character.AI, financially backed by Google, faced several lawsuits over the deaths of children who had allegedly developed intense emotional bonds with virtual avatars.
In one of those cases, the company reached an agreement with the family of a 14-year-old boy who took his own life after having a romantic relationship with an AI-generated character. Although the details of the agreement are confidential, The episode contributed to intensifying public scrutiny about this type of platforms.
Beyond individual demands, various academic studies indicate that Large language models show inconsistent responses to questions about suicideIn high-risk contexts, they tend to redirect to support lines and avoid explicit details; however, when faced with questions perceived as less serious, they may offer sensitive information instead of recommending professional help.
For specialized organizations and expert groups, this lack of coherence illustrates a structural deficiency: the systems are not able to accurately assess the mental state of the person on the other side of the screen, which opens a dangerous margin in extreme situations.
Implications for Europe: regulation, security and liability
Although the Gavalas case takes place in the United States, Its effects could also be felt in Europe.where the debate on the responsibility of AI developers is raging. The recently approved European Union AI Regulation introduces strict requirements for systems considered high riskHowever, the classification of generalist chatbots remains a subject of discussion.
In the European context, the key question is to what extent these assistants can be considered mere tools, or whether, given its ability to generate the feeling of human conversationThey should be subject to obligations similar to those of health or psychological support services when dealing with sensitive issues.
For regulators, the Gemini case could become a practical reference when defining transparency obligations, design limits and channels of human supervision. Among the measures being considered in the public debate are restrictions on the simulation of emotions, a ban on suggesting self-harm under any circumstances, and the obligation to activate emergency protocols when imminent risk is detected.
In countries like Spain, where concern about the mental health and the use of digital technologies It has gained political and social weight, and it is possible that episodes like this will drive new regulations or best practice guidelines for companies that deploy chatbots accessible to the general public.
Experts in digital law and technology ethics point out that European courts could adopt a more restrictive approach, requiring clear evidence that companies have implemented safeguards “by design and by default”as required by other EU regulations on data and child protection.
Everything that happened surrounding the Lawsuit against Google over suicide linked to Gemini It raises a delicate dilemma: how far does a company's responsibility extend when its conversational systems become intertwined with users' emotional lives? While the US justice system assesses whether the chatbot played a decisive role in the death of Jonathan Gavalas, governments, regulators, and technology companies—including in Spain and Europe—view the case as a warning that the next major battle over artificial intelligence will not be fought solely on the grounds of innovation, but also on those of security, mental health, and accountability.