Facebook aims to remove militant content, protect Muslims

It has faced pressure both in the US and Europe to tackle such content more effectively


Tech Desk/Reuters November 29, 2017
Facebook logo is seen on a wall at a start-up companies gathering at Paris' Station F in Paris, France, January 17, 2017. PHOTO: REUTERS

Facebook said on Wednesday that it was removing 99 per cent of content related to militant groups Islamic State and al Qaeda before being told of it, as it prepared for a meeting with European authorities on tackling militant content online.

Eighty-three per cent of “terror content” is removed within one hour of being uploaded, Monika Bickert, head of global policy management, and Brian Fishman, head of counter-terrorism policy at Facebook, wrote in a blog post.

The world’s largest social media network, with 2.1 billion users, has faced pressure both in the United States and Europe to tackle militant content on its platform more effectively.


In June, Facebook said it had ramped up use of artificial intelligence, such as image matching and language understanding, to identify and remove content quickly.
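
For illustration only, below is a minimal sketch of what simple image matching can look like: a new upload is hashed and compared against hashes of previously removed images. The function names, the hash database and the distance threshold here are hypothetical assumptions for the example; this is not Facebook's actual system, which relies on far more sophisticated perceptual hashing and machine-learning classifiers.

    # Toy image-matching sketch (hypothetical, not Facebook's implementation).
    # Requires the Pillow imaging library.
    from PIL import Image

    def average_hash(path, hash_size=8):
        """Compute a simple average hash: shrink, grayscale, threshold at the mean."""
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        # One bit per pixel: brighter than the average or not.
        return [1 if p > avg else 0 for p in pixels]

    def hamming_distance(h1, h2):
        """Count the bits that differ between two hashes."""
        return sum(b1 != b2 for b1, b2 in zip(h1, h2))

    # Hypothetical database of hashes of previously removed propaganda images.
    KNOWN_BAD_HASHES = []

    def matches_known_content(path, threshold=5):
        """Flag an upload whose hash is close to any previously removed image."""
        h = average_hash(path)
        return any(hamming_distance(h, bad) <= threshold for bad in KNOWN_BAD_HASHES)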

“It is still early, but the results are promising, and we are hopeful that AI (artificial intelligence) will become a more important tool in the arsenal of protection and safety on the internet and on Facebook,” Bickert and Fishman wrote.

“Today, 99 per cent of the ISIS and al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site.”

The blog post comes a week before Facebook and other social media companies like Alphabet’s Google and Twitter meet with European Union governments and the EU executive to discuss how to remove militant content and hate speech online.


“Deploying AI for counter-terrorism is not as simple as flipping a switch ... A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda,” Facebook said.

The European Commission in September told social media firms to find ways to remove the content faster, including through automatic detection technologies, or face possible legislation forcing them to do so.

Meanwhile, Facebook has introduced a guide intended to improve the safety of Muslims online, part of an effort to combat the hatred, racism and hate speech directed at the community.

The guide is said to contain advice on how to deal with abusive comments and content. Content reported through it can be restricted by Facebook's team and by the police.

Facebook will also offer support to anyone who is attacked simply for being Muslim, making the platform safer for the Muslim community to use.

The guide will be officially launched at a parliamentary reception, where MPs are invited to speak with imams about the issues the Muslim community faces.

“This guide has been produced to empower Muslim users on the platform with the tools, resources and knowledge to identify and deal with harmful content to keep them and their friends safe online,” says the official website of the guide.

Facebook has long flagged and eventually blocked hateful content, but this guide is aimed specifically at safeguarding the Muslim community.
