Instagram users claim they're getting censored. Here's what Meta says.

In April, Instagram removed sex toy shop Bellesa’s account — and it appears to be part of a pattern of Meta removing posts and accounts that have to do with sex and LGBTQ issues, advocates warn.

Nonprofit Repro Uncensored told Mashable that last month it received over 130 reports on its website about social media platforms censoring accounts. In this context, “censored” includes account removal, post removal, or shadowbanning, in which a platform deprioritizes an account (such as not showing it in Search or Explore). 

The vast majority of these reports were about Instagram.

Repro Uncensored’s executive director, Martha Dimitratou, confirmed to Mashable that the organization has received more reports in April 2026 than in any other month since it began collecting reports in early 2024. 

What’s happening on Instagram?

The influx of reports is partly because people are now more aware of Repro Uncensored's project to monitor and track digital censorship, and of how to report it, Dimitratou said. But there have also been many cases recently, at least five or six reports a day, she added.

Instagram banning or shadowbanning sex workers, sex educators, LGBTQ accounts, and others isn’t exactly new. After the twin laws FOSTA/SESTA were passed in 2018, major social platforms began cracking down on sexual content — or content perceived as sexual — even more than previously. 

FOSTA/SESTA makes online platforms liable for user content that enables sex trafficking or solicits prostitution. While supporters of FOSTA/SESTA claimed it would fight online sex trafficking, the law has led to just one reported federal prosecution (as of 2021, and there doesn't appear to have been another reported since).

Instead, platforms overcorrected and began wiping sexual content in general, including non-soliciting discussions of sex, sex education, and sexual expression. With sex workers and sex-adjacent accounts wiped from online spaces, the people behind them are less safe, according to two studies, one from the Fordham Law Review and the other from the Anti Sex Trafficking Review.

While FOSTA/SESTA has been the law for about eight years now, Instagram’s crackdown is intensifying, according to Repro Uncensored and others. Last year, porn performer Siri Dahl told Mashable she was on her eighth Instagram account, and the platform was getting more strict. OnlyFans model Lily Phillips’s Instagram account was taken down in early May, and she told Mashable that she noticed fellow performers had their accounts taken down as well. 

A 2025 study by the Center for Intimacy Justice titled The Digital Gag found that of the sexual and reproductive health groups studied, 63 percent had organic content (not ads) removed from Meta platforms, and 84 percent of businesses and 76 percent of nonprofits had ads rejected by Meta.

Meta didn’t respond to a question about whether it is removing more accounts now than in the past.

“It’s very unfortunate,” said Dimitratou of the account removals. “It’s something we need to think of critically as a society.” This is happening globally, she said, including in the U.S. and Europe. These are “supposed to be places that are respectful of human rights and freedom of speech and freedom of expression,” she said, “and presumably care about reproductive health and other issues.”

Meta’s policy response

Just weeks ago, head of Instagram Adam Mosseri told The Mirror why OnlyFans creators are getting booted off the platform: “They’re promoting explicit content…You can’t promote, you can’t solicit, so you can certainly be on Instagram, but obviously, we have very clear rules about what is and what is not allowed on the platform.”

As for Meta’s rules, the rationale for its adult sexual solicitation and sexually explicit language policy states:

“People use our services to discuss and draw attention to sexual violence and exploitation. We recognize the importance of and allow for this discussion. We also allow for the discussion of sex worker rights advocacy and sex work regulation. However, we draw the line…when content facilitates sexual encounters or commercial sexual services between adults or when content asks for or offers pornographic or sexual content. We do this to avoid facilitating transactions that may involve trafficking, coercion and non-consensual sexual acts.”

A Meta spokesperson told Mashable, “Every organization and individual on our platforms is subject to the same set of rules, and any claims of enforcement based on group affiliation or advocacy are baseless. We also give people the opportunity to appeal decisions if they think we’ve got it wrong.”

Meta also told Mashable that it’s been clear it wants to allow more speech and reduce enforcement mistakes, and that it’s committed to doing so.

A viral post shared by Repro Uncensored and the @feminist Instagram account included the handles of 51 accounts that reported censorship from Instagram. Meta told Mashable that of the 51, one was Bellesa (which Meta claims was correctly removed for multiple violations), three handles didn’t exist, and the majority of the remaining accounts are fully active.

Some of the accounts and content were disabled in error and have been reinstated; others were correctly enforced, Meta said. 

Meta acknowledged that it sometimes makes mistakes, but said that because some instances involved content on the borderline of its policies, it's understandable that its systems flagged them.

Given the reinstatements, Dimitratou said it’s concerning that Repro Uncensored has to run campaigns both digitally and in the press to gain attention and get accounts reinstated.

“We want citizens from any country to be able, on their own, to use the tools and mechanisms that are supposed to have been set in place to reinstate their accounts,” she said. Often, account holders go through the standard appeals processes but don’t get answers. 

Repro Uncensored is calling for improved appeals processes, as well as more transparent, accountable content moderation systems. Dimitratou says the group believes AI moderation often fails to account for context, particularly in discussions of women’s health and queer issues, and instead flags content based on keywords alone.

She also noted that coordinated reporting by bad actors, such as anti-rights groups, may play a role in account removals.

And the stakes of this kind of moderation extend beyond social media visibility. For activists and educators, losing access to platforms can directly affect how people find critical health information. Since the overturning of Roe v. Wade in 2022, conservatives have been pushing to limit abortion access in the states. (As of May 4, the Supreme Court has restored access to the mail-order abortion pill.) 

“Access to reproductive health and abortion has been dramatically, even more so, under threat,” she said. “There is a larger picture of how [our] rights are very much being threatened, and so censorship is the battleground of how a lot of this information is being accessed.”

So these aren’t isolated incidents, nor is it merely about posting online. According to Dimitratou, it’s about the foundation of how we communicate, share information, and access healthcare.

Mashable
