Facebook is back in the spotlight on two intersecting fronts: the closure of a misogynistic group that shared intimate images without permission, and the spread of scams that impersonate public services to capture data. In both cases, all eyes turn to Meta's content moderation and its ability to tackle large-scale harm.
While the company maintains that it has strong policies against sexual exploitation and fraud, doubts are growing about the effectiveness and speed of its mechanisms. Amid complaints, the recurrence of the same problems, and regulatory pressure, users are demanding certainty and results.
The closure of 'Mia Moglie': what happened and how it got there

The Facebook group known as 'Mia Moglie' had been operating since 2019 and, after gaining traction this year, reached about 32,000 members. It was shut down by Meta on August 20 for violating its rules against the sexual exploitation of adults, a closure that came after multiple complaints were filed publicly and with local authorities.
The content included non-consensual intimate images, some taken secretly or shared without permission within a couple's context. Along with the photos, there were derogatory comments that fueled a dynamic of objectification and digital violence against women.
Several victims have described the emotional and social impact of having their privacy exposed, with accounts of shame, fear, and broken trust. The company insists it prohibits the distribution of non-consensual intimate material, as well as the threat of its distribution, including cases of covert captures of sexualized body parts.
Despite the closure, media outlets and advocacy groups have detected the emergence of new channels with similar purposes, including fake Facebook profiles and activity on other platforms. That "hydra effect" (one head is cut off and others emerge) demonstrates the challenge of moderation and the ease with which these toxic communities replicate.
This is not an isolated phenomenon. In 2017, a French-speaking group, 'Babylone 2.0', with tens of thousands of members sharing intimate material without permission, was dismantled. And in 2024, the Meta Oversight Board urged strengthening policies against deepfakes of non-consensual nudes, a front that continues to grow. Recurring cases of Facebook account misuse underline the persistence of the problem.
Subscription scams and moderation challenges on Facebook

Fake pages impersonating public transport
Between July 2024 and July 2025, 1,075 Facebook pages posing as transport operators were identified in 746 cities and regions across 60 countries. Their bait: "bargain-priced" season tickets that redirected to phishing websites designed to steal personal and banking information.
Many of these pages had few followers but achieved great reach thanks to more than 9,000 paid ads on Facebook and Instagram. Some 55% had at least one ad removed for violating the rules, although most profiles were still active at the time of data collection; there have also been cases of Facebook phishing.
Meta places a large part of the administrators outside the EU, with a concentration in Vietnam, followed by Ukraine, Bangladesh, and the United States, yet 68% of the impersonations targeted European cities. France, Spain, the United Kingdom, and Italy were the most affected, with Barcelona the city with the most detected cases.
Technical analysis found that more than half of the fraudulent domains were hosted on two IP addresses of a Russian provider, JSC Selectel. Of the 590 associated websites, many shared a nearly identical design tailored to each city, suggesting a coordinated operation; this is compounded by concerns about password leaks, which further complicate global security.
In Spain, Maldita.es reported 58 pages with active ads using the Digital Services Act (DSA) mechanism and, a week later, 93% were still active, a fact that fuels the discussion about response times and the effectiveness of enforcement.
Standards and tools: progress, gaps and regulatory pressure
Meta maintains that it prohibits the distribution of non-consensual intimate images and that it combines artificial intelligence, moderation teams, and trusted "flaggers" (such as the Italian association Permesso Negato) to remove illegal content. However, critics have faulted the company for not sufficiently highlighting these support channels and for scaling back certain verification efforts in the United States in early 2025.
Experts in digital violence point out that the company's focus on child protection does not fully cover abuse directed at adults. Independent studies have reported very low response rates to abusive comments directed at women in public life, reinforcing the perception of impunity in parts of the ecosystem.
In parallel, regulators are calling for strict enforcement of the Digital Services Act, demanding greater transparency, oversight, and sanctions. Germany has already implemented a law to remove hate speech from social media, while in Italy the competition authority has fined Meta for other reasons. Low digital literacy in some countries further complicates user protection.
The problem goes beyond Meta: external forums such as Phica.eu, where intimate images and deepfakes were shared, including images of the Italian Prime Minister, have been shut down after years of complaints, but the question remains as to what happens to such content once it circulates online.
What you can do as a user
Prevention counts, and there are simple measures that reduce risk and help stop the spread of harm. Your individual actions also help improve the ecosystem.
- Be wary of subscription deals advertised by newly created ads or pages; always check the URL and look for the operator's official site.
- Do not share or react to non-consensual intimate content; report it to Facebook using the reporting tools.
- Activate security options (login alerts, two-factor authentication) and keep your devices and apps up to date.
- If you are a victim, contact support organizations and use the reporting channels of Meta and, if applicable, the competent authority under the DSA.
The closure of 'Mia Moglie' and the wave of impersonations make it clear that Facebook faces increasing scrutiny of its ability to respond to real harm: preventing the spread of content that violates rights, stopping scams quickly, and collaborating with regulators and civil society to ensure that the exceptional does not become normalized.