The start of advertising in ChatGPT
OpenAI's move has opened a new front in the race for generative artificial intelligence. Just weeks after OpenAI announced that it will begin showing ads based on user conversations, its rival Anthropic has taken advantage of the Super Bowl to launch a campaign that openly mocks that decision and presents its own assistant, Claude, as an ad-free alternative.
With a series of commercials designed for the biggest event on American television, the company founded by former OpenAI executives turns a very serious concern for the sector into satire: what happens when AI stops focusing on helping the user and starts prioritizing advertising monetization? At the same time, it reinforces its brand positioning by promising not to finance Claude through advertising, a stance with direct implications for the trust of users and companies, in Europe as well.
A Super Bowl campaign against AI ads
Under the motto "Ads are coming to AI. But not to Claude," Anthropic has premiered its first Super Bowl ad, with a campaign created by the agency Mother and produced by Biscuit Filmworks. The objective is clear: to forcefully contrast its product strategy with OpenAI's advertising shift.
The centerpiece is a 30-second spot (with an extended one-minute version) in which a young man, unable to complete a set of pull-ups in a park, asks a muscular man for advice. The man responds with a detailed, almost robotic explanation of how to get great abs, mimicking the tone of an AI chatbot. Suddenly, the explanation veers off course and becomes a blatant advertisement for fictional insoles called "StepBoost Max."
The twist parodies precisely the kind of scenario that worries part of the industry: a seemingly useful conversation becomes contaminated with commercial recommendations the user never asked for. The message "Ads are coming to AI. But not to Claude" then appears on screen, underscoring Anthropic's commitment not to go down that path.
According to reports, the campaign includes four ads that recreate dialogues between users and AI systems, in which the responses mix legitimate advice with irrelevant products and services. Although OpenAI and ChatGPT are not explicitly mentioned, the reference is transparent, especially for an audience already familiar with the arrival of ads on ChatGPT.
In another ad, a user seeks help to communicate better with his mother and ends up receiving a surreal suggestion: sign her up for a dating platform for seniors. Once again, the piece illustrates how advertising logic could twist the purpose of the conversation, shifting the focus from emotional support to a commercial recommendation with no real connection to the need expressed.
Claude as an ad-free assistant: Anthropic's promise
Beyond the Super Bowl buzz, Anthropic has accompanied the ads with a public manifesto in which it sets out its position on advertising in artificial intelligence tools. The company acknowledges the positive role that ads have played in other digital services (like free email, social networks, or search engines), but it draws a red line: conversations with an AI assistant should not become an advertising medium.
In that document, the company maintains that Claude must behave "unambiguously in the best interests of users" and that introducing advertisements would be incompatible with that objective. This translates into several concrete commitments: there will be no "sponsored" links next to the answers, recommendations will not be influenced by advertisers, and third-party product placements that the user has not explicitly requested will not be integrated.
Anthropic warns that advertising, once inside a platform, tends to condition product development and revenue targets. In the case of chatbots, this could generate subtle biases: for example, when faced with a user who mentions sleep problems, an ad-free assistant would explore various causes and solutions, whereas a system with advertising incentives might be tempted to steer the conversation towards a supplement or a paid service that generates a commission.
The company also draws attention to the risk of designing the experience so that users spend more time conversing with the AI simply because that increase in engagement improves business metrics. From its point of view, the most valuable interaction is not always the longest, and a truly helpful assistant should be able to resolve issues quickly and clearly, even if that reduces usage time.
This statement aligns with the brand platform "Keep Thinking," launched by Anthropic months ago. The campaign presents Claude as a tool for reflection and in-depth work, rather than a product optimized to capture and retain attention at any cost. It comes at a time when Europe is debating the impact of AI on fundamental rights and digital well-being, and the message resonates particularly strongly with regulators and companies concerned about technological ethics.
A business model that is not dependent on advertising
Claude's anti-advertising stance is not just a communicative gesture; it is directly linked to Anthropic's business model. The company states that its revenue strategy is based on a combination of business contracts and paid subscriptions, which allows it to do without advertising monetization in its conversational assistant.
This approach means forgoing, at least for now, a potentially enormous source of revenue. Various market analyses indicate that chatbot advertising still represents a very small portion of total AI-powered search investment (around $2 billion this year, according to eMarketer estimates), but projections suggest that the segment could exceed $25 billion by 2029. Anthropic therefore assumes a significant opportunity cost.
The company admits that its decision implies competitive disadvantages compared to other players who choose to finance free services with advertising, and it claims to respect that other AI firms may reach different conclusions. However, it justifies its position by pointing out that chatbot users share highly sensitive information (including data related to mental health, personal finances, or complex family situations) and that introducing advertising in that context could be considered "exploitative."
For markets like the European one, where the General Data Protection Regulation (GDPR) and the upcoming AI regulatory framework emphasize transparency and risk minimization, this no-ads promise is interpreted as a potential differentiating factor. Companies and public administrations seeking to reduce their exposure to models based on intensive data exploitation could see Claude as an alternative aligned with stricter internal policies.
Meanwhile, Anthropic remains a less well-known brand than OpenAI among the general public, but one with greater relative weight in acquiring enterprise clients. The Super Bowl campaign aims to close that awareness gap, taking advantage of one of the world's largest advertising platforms to position itself as "the principled choice" in the generative AI market.
The arrival of ads on ChatGPT and OpenAI's vision
The starting point for this confrontation is OpenAI's recent announcement that it will begin to show custom ads in ChatGPT conversations. The ads will be personalized based on the content of the queries, which could translate, for example, into links to flights and hotels after asking for help planning a vacation, or into promotions of professional tools when discussing work tasks.
OpenAI, however, has tried to preempt criticism by detailing some safeguards. The company claims that the advertisements will not influence the responses generated by the model, that conversations will not be shared with advertisers, and that all promotional messages will appear clearly labeled and located at the bottom of the interface.
In addition, the company indicates that users will be able to disable ad personalization, that ads will not be shown to users under 18, and that there will be content exclusions in particularly sensitive areas, such as politics and mental health. The stated goal is to maintain a certain level of trust without abandoning a model that helps fund the free version of the service.
Sam Altman, CEO of OpenAI, previously described the introduction of ads in ChatGPT as a "last resort." However, in his latest statements he has made it clear that advertising is already part of the business plan, albeit with limitations and without being integrated, for now, into requests channeled through assistants like Siri.
Sam Altman's response to the Anthropic campaign
OpenAI's reaction to Anthropic's advertising offensive was swift. In a lengthy post on X (formerly Twitter), Sam Altman admitted that he found the Super Bowl commercials amusing, but he labeled them "dishonest." According to the executive, the company has no intention of adopting the kind of practices that Anthropic's ads caricature.
Altman argues that OpenAI's guiding principle regarding ads is precisely to avoid the intrusive approach shown in its rival's videos, and he claims to be aware of the backlash such an aggressive approach would generate among users. In this way, he tries to separate the exaggerated scene presented by Anthropic from the reality of OpenAI's business plans.
The executive also accuses Anthropic of a "double standard": criticizing supposedly misleading advertisements that, in practice, do not exist, while itself leveraging the biggest advertising event of the year to spread its message. At the same time, he argues that advertising will allow ChatGPT to remain accessible to broad segments of the population, something which, in his view, differentiates his company from a competitor he describes as focused on a product "expensive for wealthy people."
In his response, Altman goes beyond the issue of the ads and accuses Anthropic of wanting to "control what people do with AI." He criticizes them for restricting access to their programming products for companies that do not fit their criteria (including OpenAI itself) and suggests that they also try to influence the norms on how AI should be used in general and which business models are acceptable.
The head of OpenAI concludes by reinforcing his company's commitment to more open and democratic decision-making. With the stated goal of building a robust, safe, and beneficial general AI ecosystem for as many people as possible, the company promises to continue reducing prices and expanding the information provided by its models, in an attempt to counter the narrative that advertising and accessibility are necessarily incompatible.
The exchange of messages between Anthropic and OpenAI marks a turning point in the conversation about how AI assistants, which are beginning to be integrated into the daily lives of millions of European and Spanish citizens, will be financed. While some argue that advertising is the most effective tool for sustaining free services on a large scale, others warn of the risks of mixing commercial interests with interactions based on sensitive personal data. Amid this debate, users and companies will have to decide which model best aligns with their expectations of trust, transparency, and real-world utility when interacting with artificial intelligence.