Whistleblower says Facebook generating terror content
The National Whistleblower Center carried out a five-month study of the pages of 3,000 members who liked or connected to organisations proscribed as terrorist by the US government
SAN FRANCISCO:
Facebook is unwittingly auto-generating content for terror-linked groups that its artificial intelligence systems do not recognise as extremist, according to a complaint made public on Thursday.
The National Whistleblower Center (NWC) in Washington carried out a five-month study of the pages of 3,000 members who liked or connected to organisations proscribed as terrorist by the US government.
Researchers found that the Islamic State group and al Qaeda were "openly" active on the social network.
More worryingly, Facebook's own software was automatically creating "celebration" and "memories" videos for extremist pages that had amassed sufficient views or "likes."
The NWC said it filed a complaint with the US Securities and Exchange Commission (SEC) on behalf of a source who preferred to remain anonymous.
"Facebook's efforts to stamp out terror content have been weak and ineffectual," read an executive summary of the 48-page document shared by the center.
"Of even greater concern, Facebook itself has been creating and promoting terror content with its auto-generate technology."
Survey results shared in the complaint indicated that Facebook was not delivering on its claims about eliminating extremist posts or accounts.
The company told AFP it had been removing terror-linked content "at a far higher success rate than even two years ago" since making heavy investments in technology.
"We don't claim to find everything and we remain vigilant in our efforts against terrorist groups around the world," the company said.
Facebook and other social media platforms have been under fire for not doing enough to curb messages of hate and violence, while also being criticised for failing to offer equal time to all viewpoints, no matter how unpleasant.
Facebook in March announced bans on praise or support for white nationalism and white separatism across the social network and Instagram.
Facebook’s automation appears to have a major problem: its algorithms are generating extremist content by default. The auto-generated content includes videos, pictures and pages related to white supremacy and al Qaeda.