OpenAI removes ad-style recommendations from ChatGPT after controversy

  • OpenAI has disabled app suggestions in ChatGPT that many users mistook for advertising.
  • The company insists that there were no commercial agreements or evidence of paid advertising on the service.
  • The tests aimed to recommend applications from the ChatGPT ecosystem, but the format was perceived as disguised advertising.
  • OpenAI will review the design and transparency of these features before considering their possible return.

Ad-free ChatGPT interface

In recent days, OpenAI has been forced to intervene decisively in one of the most talked-about controversies surrounding ChatGPT: the appearance of app recommendations that, to many users, looked like outright advertisements. What began as a few screenshots on social media ended up sparking a massive debate about whether the company had quietly introduced advertising into its conversational assistant without warning.

The company has responded with a decisive measure: completely disabling the suggestion system that caused the controversy. Although OpenAI insists these were never paid ads, the chosen format and the way the messages were displayed crossed a delicate line in users' perception, especially among those who pay a monthly subscription and expected an experience free of any advertising.

How the controversy over “recommendations” on ChatGPT began

Controversy over app suggestions on ChatGPT

The origin of the conflict was seemingly simple: several users began receiving recommendations for third-party apps during their conversations, including direct links to services such as Peloton, Booking.com, Canva, CapCut, Spotify, and Target. These suggestions appeared embedded in the ChatGPT interface in a way very similar to a contextual ad block, without any clear label indicating that it was an experiment.

The screenshots spread quickly and, little by little, the idea took hold that ChatGPT had started displaying covert advertising, even to those who pay for the Pro service. Some users complained that these messages had nothing to do with what they were asking the model, reinforcing the impression that they were promotions inserted without any justification.

Discontent grew especially strong among the more technical community and paying subscribers, some of whom even considered canceling their accounts, feeling that the experience was no longer neutral. The case caught the attention of international technology media, which gathered both the complaints and the company's version of events to clarify exactly what was happening on the platform.

At the same time, the discussion became intertwined with another underlying concern: a possible shift toward an ad-based monetization model, given that ChatGPT already has hundreds of millions of weekly users and has become as common a point of reference as the major search engines.


What did OpenAI say it was actually testing?

Experimental tests in ChatGPT

According to the company's official explanation, this was an experimental system designed to give visibility to the applications integrated into the ChatGPT ecosystem. Since the introduction of connected apps and tools in the model, OpenAI has been exploring ways to suggest extensions and services that may be useful depending on the context of the conversation.

In theory, the idea was that if a user asked about something related to physical exercise, travel planning, or video editing, the assistant would suggest compatible applications with which to complete the task. Its creators insist there were no commercial agreements or payments involved: it was an app discovery mechanism built on their own SDK.

The problem is that, in practice, the system wasn't fine-tuned enough. In many contexts the suggestions made no sense, disrupting the flow of conversation and making them seem like mere marketing ploys. Users chatting about unrelated topics would suddenly see suggestions for Peloton or specific stores without any clear justification.

Daniel McAuley, ChatGPT's data manager, acknowledged that the lack of relevance made these suggestions a "bad or confusing" experience. He explained that the medium-term goal is for the assistant to point to apps the user can interact with directly, for example to book accommodation, edit an image, or manage documents, but admitted that, in its current state, the system did not fulfill that purpose without creating confusion.

Admitting errors and deciding to deactivate the system

Faced with escalating criticism, key figures within OpenAI had to offer public explanations. Mark Chen, the company's director of research, stated that he understood the concern and that the company could not ignore how the system's behavior was perceived, beyond its technical intentions.

Chen was clear in admitting that anything resembling an advertisement should be handled with care, and that in this case they had fallen short in designing the experience. Following that reflection, he confirmed that the company had opted to disable this type of suggestion entirely while it works on improving their accuracy and offering users clearer controls to reduce or turn them off if they find them unhelpful.

However, Chen himself clarified that the problem was not merely a public misunderstanding: the specific design of these modules looked too much like a traditional advertising block, so the line between a contextual recommendation and a covert advertisement became dangerously blurred.

The message OpenAI is now trying to convey is twofold: on the one hand, there are no ads nor any commercial agreements behind these suggestions; on the other, the company has taken the community's discontent seriously and has chosen to withdraw the system before relaunching it with a different approach and, presumably, clearer labeling.

User trust, transparency, and the future of monetization

The incident comes at a time when the business model of large generative AI services is under scrutiny. Maintaining and training systems like ChatGPT requires expensive infrastructure: enormous data centers, energy consumption far exceeding that of a traditional search engine, and specialized staff. Until now, OpenAI has been funded primarily through subscription plans and enterprise agreements, but it is no secret that the debate about advertising is underway.

In fact, in recent months analysts have examined beta versions of the ChatGPT app for Android containing code references to possible ad-related features, especially ones tied to search experiences within the chatbot itself. Terms like "search ad" and "ad features" have fueled speculation about a future in which the free version could be partially funded through advertising.

OpenAI has tried to separate this structural issue from the current controversy: those references are part of internal testing, not an imminent rollout of targeted ads. At the same time, executives have hinted more than once that they are not ruling out advertising as a revenue stream at some point, especially for those who use ChatGPT for free.

In this context, the reaction to the app suggestions serves almost as a thermometer: users are highly sensitive to anything resembling an advertisement, especially when it is integrated into a supposedly impartial conversation. This complicates the design of any future monetization scheme that involves sponsored messages or commercial recommendations.

If the company decides to revive these features, everything suggests it will have to do so with much more emphasis on transparency: visible labels, a clear separation between the model's response and promotional content, and simple controls that let users decide how much of this content they want to see in daily use.

Impact on users in Europe and regulatory challenges

For European users, the discussion has an added nuance. European Union rules on data protection, advertising, and digital services are especially strict with practices that could be confused with covert advertising or that involve opaque profiling of users for commercial purposes.

If ChatGPT were to openly integrate ads in the future, OpenAI would have to comply with the General Data Protection Regulation (GDPR), in addition to the requirements of the Digital Services Act and emerging recommendations on high-impact AI systems. Any system that mixes automated recommendations and personalized advertising without clear explanations would risk intense scrutiny from regulators and consumer authorities.

For now, the company says European users may still see app suggestions in some experiments, but always without an associated economic component. The expectation is that, after the withdrawal of the most controversial system, future tests will be more limited and place greater emphasis on explaining what is being shown and why.

In an environment where trust has become a critical asset for AI services, episodes like this reinforce the message that any step toward advertising will have to be considered very carefully. It is not enough that, technically, no money changes hands: user perception, especially in Europe, carries as much or more weight than the system's internal architecture.

In the end, what happened with ChatGPT leaves a fairly clear lesson: the line separating contextual help from a hidden ad is very thin. OpenAI has opted to pull back and remove suggestions that could cause confusion, aware that a single poorly designed feature can erode a bond of trust built over years. From now on, any attempt to reintroduce app recommendations or explore advertising formats will have to demonstrate, unequivocally, that it prioritizes clarity and user experience over the temptation to monetize at any cost.
