Meta moves to curb AI-driven 'unoriginal' content on Facebook
Meta announced on Monday that it will begin enforcing stricter measures against Facebook accounts that repeatedly share unoriginal content, including reused text, images, and videos, as part of a broader push to protect content integrity and support original creators on the platform.
The announcement, made in a blog post on Meta's website, revealed that the company has already removed around 10 million accounts this year for impersonating well-known content creators, and has taken action against a further 500,000 profiles engaged in spam tactics or fake engagement.
These actions include reducing the visibility of posts and comments, as well as suspending access to Facebook’s monetisation programmes.
The update follows similar policy changes by YouTube, which recently moved to clarify its own stance on mass-produced, repetitive videos, particularly those enabled by generative AI. Meta stated that users who transform existing content through commentary, reactions, or trends will not be affected.
Instead, enforcement will focus on accounts that simply repost material, whether through spam networks or by impersonating the original creator.
Accounts found to be repeatedly violating these standards will face penalties, including being barred from monetising their content and a reduction in the distribution of their posts across Facebook’s algorithmic feeds.
Meta is also testing a new feature that will insert links in duplicate videos directing viewers to the original source, a move aimed at ensuring that original creators receive proper attribution.
The shift arrives at a time when content across social platforms has become increasingly saturated with low-quality, AI-generated media.
While Meta did not explicitly mention "AI slop" (a term for bland or poorly produced AI-generated content), the company's guidance appears to address such material indirectly.
The announcement comes amid growing frustration among creators about Facebook’s automated enforcement mechanisms.
A petition signed by nearly 30,000 users has called for better human oversight and clearer appeals processes, citing widespread issues with wrongful account suspensions, according to TechCrunch.
The new enforcement policies will be rolled out gradually over the coming months, giving creators time to adapt. Facebook’s Professional Dashboard now includes post-level insights to help users understand how their content is being evaluated, and whether it may be at risk of demotion or monetisation restrictions.
In its most recent Transparency Report, Meta said that 3% of Facebook’s monthly active users worldwide are fake accounts, and it acted on 1 billion such profiles in the first quarter of 2025.
As the company continues to refine its approach, it is also leaning more heavily on community-based fact-checking in the US, using a model similar to X’s Community Notes, instead of relying solely on internal moderation teams.