Are social media platforms ready to counter misinformation?
From deepfake videos of Indonesia’s presidential contenders to online hate speech directed at India’s Muslims, social media misinformation has been rising ahead of a bumper election year, and experts say tech platforms are not ready for the challenge.
Voters in Pakistan, Indonesia and India go to the polls this year as more than 50 nations hold elections, including the United States where former president Donald Trump is looking to make a comeback. In Bangladesh, Prime Minister Sheikh Hasina was sworn in for a fifth term last Thursday after a landslide victory in an election boycotted by the opposition.
Misinformation on social media has had devastating consequences ahead of, and after, previous elections in many of the nations where voters are going to the polls this year.
In Pakistan, where a national vote is scheduled for Feb 8, hate speech and misinformation were rife on social media ahead of the 2018 general election, which was marred by a series of bombings that killed scores of people across the country.
In Indonesia, which votes on Feb 14, hoaxes and calls for violence on social media networks spiked after the 2019 election result. At least six people were killed in subsequent unrest.
Despite the high stakes and evidence from previous polls of how fake online content can influence voters, digital rights experts say social media platforms are ill-prepared for the inevitable rise in misinformation and hate speech. Recent layoffs at big tech firms, new laws to police online content that have tied up moderators, and artificial intelligence (AI) tools that make it easier to spread misinformation could hurt poorer countries more, said Sabhanaz Rashid Diya, an expert in platform safety.
"Things have actually gotten worse since the last election cycle for many countries: the actors who abuse the platforms have gotten more sophisticated but the resources to tackle them haven't increased," said Diya, founder of Tech Global Institute.
"Because of the mass layoffs, priorities have shifted. Added to that is the large volume of new regulations ... platforms have to comply, so they don't have resources to proactively address the broader content ecosystem (and) the election integrity ecosystem," she told the Thomson Reuters Foundation.
"That will disproportionately impact the Global South," which generally gets fewer resources from tech firms, she said. As generative AI tools, such as Midjourney, Stable Diffusion and DALL-E, make it cheap and easy to create convincing deepfakes, concern is growing about how such material could be used to mislead or confuse voters.
AI-generated deepfakes have already been used to deceive voters from New Zealand to Argentina and the United States, and authorities are scrambling to keep up with the tech even as they pledge to crack down on misinformation. The European Union - where elections for the European parliament will take place in June - requires tech firms to clearly label political advertising and say who paid for it, while India's IT Rules "explicitly prohibit the dissemination of misinformation", the Ministry of Electronics and Information Technology noted last month.
Alphabet's Google has said it plans to attach labels to AI-generated content and political ads that use digitally altered material on its platforms, including on YouTube, and also limit election queries its Bard chatbot and AI-based search can answer.
YouTube's "elections-focused teams are monitoring real-time developments ... including by detecting and monitoring trends in risky forms of content and addressing them appropriately before they become larger issues," a spokesperson for YouTube said.
Facebook's owner Meta Platforms - which also owns WhatsApp and Instagram - has said it will bar political campaigns and advertisers from using its generative AI products in advertisements. Meta has a "comprehensive strategy in place for elections, which includes detecting and removing hate speech and content that incites violence, reducing the spread of misinformation, making political advertising more transparent (and) partnering with authorities to action content that violates local law," a spokesperson said.
X, formerly known as Twitter, did not respond to a request for comment on its measures to tackle election-related misinformation. TikTok, which is banned in India, also did not respond.
While social media firms have developed advanced algorithms to tackle misinformation and disinformation, "the effectiveness of these tools can be limited by local nuances and the intricacies of languages other than English," said Nuurrianti Jalli, an assistant professor at Oklahoma State University.
In addition, the critical US election and global events such as the Israel-Hamas conflict and the Russia-Ukraine war could "sap resources and focus that might otherwise be dedicated to preparing for elections in other locales," she added.
In the past year, Meta, X and Alphabet have rolled back at least 17 major policies designed to curb hate speech and misinformation, and laid off more than 40,000 people, including teams that maintained platform integrity, the US non-profit Free Press said in a December report.
"With dozens of national elections happening around the world in 2024, platform-integrity commitments are more important than ever. However, major social media companies are not remotely prepared for the upcoming election cycle," civil rights lawyer Nora Benavidez wrote in the report.
"Without the policies and teams they need to moderate violative content, platforms risk amplifying confusion, discouraging voter engagement and creating opportunities for network manipulation to erode democratic institutions."