From mega to MAGA corporation
Meta, under Mark Zuckerberg’s leadership, is making a controversial shift that threatens the integrity of its platforms—Facebook, Instagram, and Threads—if there ever was such a thing. In a bid to increase user engagement and advertising revenue, Zuckerberg has decided to eliminate the company’s US fact-checking teams and reduce its efforts to moderate misinformation, all under the guise of prioritising free speech. With this move, Meta seeks to capitalise on the massive volume of content that drives user engagement, even if that content is false, radical, or inflammatory.
Engagement, which tracks the time users spend interacting with content, has always been Meta’s primary focus. No matter what other reasons may be cited, a mega corporation’s goal is always revenue maximisation. The more time users spend on the platform, the more advertisements they see, and the higher Meta’s revenue.
This decision, “coincidentally,” also aligns with Donald Trump’s second term as President of the United States. It seems that, in order to appease a political faction vocal about social media’s role in spreading misinformation, Meta is fully willing to go from a mega to a MAGA corporation.
Critics argue that by moving from fact-checking to a community notes system, Zuckerberg is willing to sacrifice the truth to curry favour with the incoming administration and increase his platforms’ profitability. The move mirrors Elon Musk’s 2022 decision to relax content moderation on X (formerly Twitter), a change that resulted in a measurable increase in misinformation on that platform.
By eliminating fact-checking and reducing the ability to moderate disinformation, Zuckerberg is essentially embracing the financial incentives that come with misinformation. Without fact-checkers to flag false claims or warn users about inaccurate content, posts spreading conspiracy theories, racist content, and hate speech will flourish. Such posts—designed to provoke emotional reactions—are more likely to generate engagement, regardless of their veracity. Facebook and Instagram’s algorithms reward this emotional engagement, boosting posts that elicit strong reactions, whether positive or negative. This creates a feedback loop in which misleading or inflammatory content continues to dominate, as users interact with it more.
Dr Cody Buntain, a social media researcher at the University of Maryland, argues that without fact-checking, Meta’s platforms will become increasingly hyper-partisan and hostile, as the lack of moderation will allow extreme voices to dominate. The platforms will cater to the interests of users who are already more extreme in their views, further deepening societal divides. As false information spreads, it may lead to more intense polarisation and even real-world harm, as seen in the past with events like the Capitol insurrection or mass violence fuelled by online disinformation.
Meta’s policy change is unsurprisingly consistent with its history of prioritising engagement at all costs. In a leaked email from 2016, Meta’s Vice President Andrew Bosworth argued that the company should not be concerned with the potential negative consequences of its platform, such as suicide or terrorism, as long as its efforts resulted in the “de facto” benefits of user connection. Although Zuckerberg distanced himself from these views, Bosworth’s promotion to Chief Technology Officer in 2022 suggests that Meta’s pursuit of profit through engagement remains unchanged.
Ultimately, the move to end fact-checking is emblematic of a larger trend in the tech industry, where platforms are more concerned with the quantity of user engagement than the quality of information.
Zuckerberg and Meta are betting that the financial rewards of increased user interaction will outweigh the societal consequences of spreading misinformation—a bet that, given such an ethically void policy change, is unfortunately likely to pay off.
This shift could lead to further erosion of trust in social media and exacerbate the challenges of combating false information online. As such, the plague of disinformation we already face today is only going to worsen. As platforms like Meta give more power to algorithms, which reward sensational and divisive content, the truth becomes increasingly difficult to discern, leading to a toxic online environment where engagement trumps accuracy, pun intended.
The elimination of fact-checking at Meta is likely to accelerate the spread of disinformation and radical views, resulting in a more polarised and volatile online landscape. Whether this is a short-term strategy to boost profits or a long-term shift in the company’s ethos remains to be seen, but one thing is clear: Zuckerberg’s pursuit of engagement over ethics could have far-reaching consequences for the future of social media and democracy itself.