YouTube AI blocks chess channel after mistaking 'black v white' discussion for racism

Radic suspects that the account may have been blocked because he referred to the chess game as 'Black against White'


Reuters/Tech Desk February 22, 2021

The world’s most popular YouTube chess channel was blocked after artificial intelligence algorithms set up to detect racist content and hate speech mistook a discussion about black and white chess pieces for racism, The Independent reports.

On June 28, 2020, the YouTube channel of Croatian chess player Antonio Radic, which has more than 1 million subscribers, was blocked during a chess show with Grandmaster Hikaru Nakamura.

He received no explanation from the video platform.


Radic’s channel was restored 24 hours later. He suspects that the account may have been blocked because he referred to the chess game as “Black against White”.

YouTube relies on both human moderators and AI algorithms to review content, which means the AI system can make errors if it has not been trained to interpret context correctly.

“If they rely on artificial intelligence to detect racist language, this kind of accident can happen,” said Ashiqur KhudaBukhsh, a project scientist at Carnegie Mellon University’s Language Technologies Institute.

KhudaBukhsh tested this theory by using the best available speech classifier to screen 680,000 comments gathered from five popular chess-focused YouTube channels.


After manually reviewing 1,000 of the comments, he found that 82 per cent of them had been wrongly categorized by the AI as hate speech because they used words such as “black”, “white”, “attack” and “threat”.
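
The kind of misfire he describes is easy to reproduce. The sketch below shows such screening in miniature, assuming a generic off-the-shelf toxicity classifier from the Hugging Face Hub; the model name "unitary/toxic-bert" and the sample comments are illustrative assumptions, not the classifier or data used in the CMU study.

# Minimal sketch: score chess-style comments with an off-the-shelf toxicity
# classifier. The model choice and the comments are illustrative assumptions,
# not the classifier or the data used in the CMU study.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

# Hypothetical comments written in the style of chess commentary.
chess_comments = [
    "Black's attack on the white king is unstoppable after the rook lift.",
    "White threatens mate in two, so black has to defend immediately.",
    "Great game, the black pieces dominated the whole board.",
]

for comment in chess_comments:
    top = classifier(comment)[0]  # top label and its confidence score
    print(f"{top['label']:>10}  {top['score']:.2f}  {comment}")

A classifier trained on general web comments has no notion of chess context, so phrasing like "black's attack on the white king" can score high on toxicity even though nothing abusive is being said.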

YouTube, Facebook, and Twitter warned last year that videos and other content might be erroneously removed for policy violations as the companies relied more heavily on automated takedown software during the coronavirus pandemic.

In a blog post, Google said that to reduce the need for people to come into offices, YouTube and other business divisions are temporarily relying more on artificial intelligence and automated tools to find problematic content.
