Did WhatsApp fail us during the pandemic?

Misinformation tends to stay on WhatsApp four times longer than authentic information


Sana Jamil Khan March 08, 2021

With 2 billion monthly active users, WhatsApp is one of the most popular messaging apps in the world. Although the app offers users end-to-end encrypted communication, WhatsApp has recently faced intense criticism from media and privacy watchdogs over its lack of effort to counter misinformation.

WhatsApp saw a 40% increase in usage during the early days of the coronavirus lockdown as people tried to stay connected with their loved ones, TechCrunch reports. Government-imposed restrictions left people homebound, and WhatsApp became a chief source of information and updates, forwarded through messages.

The more people depended on social media as their news source, the more they trusted false information about the pandemic, according to a survey analysis by Washington State University researcher Yan Su.

“During the Covid-19 pandemic, social media has spread a lot of conspiracy theories and misinformation, which has negative consequences because many people use these false statements as evidence to consolidate their pre-existing political ideologies and attack each other,” says Su.

Taking advantage of this situation, bad actors spread fake news on WhatsApp, including bogus cures for Covid-19, 5G conspiracy theories, fake audio clips of doctors, and racist claims that black people are immune to the virus.

Identifying such false news is not easy because paid individuals and agencies write clickbait and bogus content to drive up their traffic. What may seem like authentic information is often a fabricated story that reaches thousands of users within minutes.

PHOTO: Statista

A global study conducted in March 2020 by market research portal Statista revealed that around 74 per cent of respondents were worried that fake news circulating about the virus was hampering their efforts to find reliable information about Covid-19, while 85 per cent of respondents felt they should receive information about the virus directly from health officials or politicians.

Moreover, in Pakistan, researchers examined messages and images from 227 WhatsApp groups for more than six weeks to analyse responses to Covid-19. The study found that while most of the information shared in these groups may not be fabricated, misinformation tends to stay on WhatsApp four times longer than authentic information.

In contrast, Covid-19 misinformation is removed from Twitter more quickly because users can publicly refute tweets carrying false news.

Similarly, on Facebook you can see when and by whom the original content was posted, and since these platforms are not end-to-end encrypted, both companies can easily remove a reported post or flag it as fake news.

WhatsApp argues that because the messaging service is end-to-end encrypted, the company cannot read the content shared between users, so there is little it can do from its end to stop fake news from circulating on its platform.

In June 2020, Carl Woog, WhatsApp's Director of Communications, attended the International Fact-Checking Network's Global Fact-Checking Summit. At the summit, Woog told Poynter that the company has been building working relationships with fact-checking projects such as Boom, Verificado, and the Colombian outlet La Silla Vacía.

The company also began labelling forwarded messages to alert receivers that the sender is not the original author. A limit was also set on the number of times a message can be forwarded, to slow the spread of conspiracy theories, The Guardian reports.

However, WhatsApp can do much more than just add a 'Forwarded' label to a message. According to a study published by the Columbia Journalism Review, the Facebook-owned company reads and stores parts of the metadata of every message sent on its platform, which could be used to check fake news and even identify those responsible for spreading it.
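To make the idea concrete, here is a minimal, purely hypothetical sketch of how metadata alone could be used to surface heavily forwarded content for review without reading any message text. The field names and the threshold are illustrative assumptions, not WhatsApp's actual data model.

from dataclasses import dataclass

# Hypothetical sketch: flag messages for fact-checker review using only
# metadata, never the encrypted content. Field names and the threshold
# are illustrative assumptions, not WhatsApp's actual data model.

@dataclass
class MessageMetadata:
    sender_id: str      # account that sent the message
    timestamp: float    # when the message was sent
    forward_count: int  # how many times the message has been forwarded

FORWARD_REVIEW_THRESHOLD = 5  # assumed cut-off for "heavily forwarded"

def needs_review(meta: MessageMetadata) -> bool:
    """Return True if forwarding patterns suggest viral spread."""
    return meta.forward_count >= FORWARD_REVIEW_THRESHOLD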

Further, the app identifies each media file with a cryptographic hash. Whenever an attachment is shared, the company checks whether its server already holds a file with the same cryptographic hash. If so, instead of uploading the file again, it simply sends the stored copy to the recipient.

This suggests that the company not only has files on its servers but also has the capability to track specific files and flag certain messages as fake news despite having end-to-end encryption.
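As a rough illustration of how such hash-based deduplication could double as a misinformation check, consider the following sketch. The choice of SHA-256 and all names here are assumptions for illustration only; WhatsApp's actual scheme is not public.

import hashlib

# Minimal sketch of hash-based media deduplication as the study describes
# it: the server keys each attachment by its cryptographic hash, so a
# re-shared file can be recognised, and checked against fact-checked
# hashes, without decrypting any chat. SHA-256 and all names are assumed.

media_store = {}        # hash -> file blob already held on the server
flagged_hashes = set()  # hashes fact-checkers have marked as false

def share_attachment(file_bytes: bytes) -> str:
    """Store the file only if its hash is new; otherwise reuse the copy."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest not in media_store:
        media_store[digest] = file_bytes  # first upload: keep a copy
    return digest  # later shares simply reference the stored copy

def is_known_misinformation(file_bytes: bytes) -> bool:
    """Check a shared file against fact-checked hashes, content unread."""
    return hashlib.sha256(file_bytes).hexdigest() in flagged_hashes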

The study further found that 40.7% and 82.2% of the misinformation shares in its data were made even after the content had been labelled as false by fact-checking agencies. Had WhatsApp flagged the content as false at the time it was fact-checked, it could have prevented a large fraction of those shares.

Despite facing backlash, WhatsApp recently announced that it is moving ahead with its controversial new terms of service, which require users to agree to let owner Facebook and its subsidiaries collect data including their phone numbers, IP addresses, and locations.

While the company insists it still won't be able to read the content of chats, gaining access to user data means it can step up its efforts to moderate content circulating on its platform.

The pandemic has highlighted how grave the consequences of misinformation can be. The public no longer knows what to believe. A year after media and privacy watchdogs first raised concerns over false information circulating on the app, WhatsApp's efforts to counter misinformation on its platform remain unsatisfactory.
