A beauty contest was judged by AI and the robots picked white winners

The first international beauty contest decided by an algorithm has sparked controversy


News Desk September 12, 2016
One expert says the results offer ‘the perfect illustration of the problem’ with machine bias. PHOTO: REUTERS

The first international beauty contest judged by “machines” was supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants. However, the results seemed to show that the colour of the contestants’ skin also determined who won.

“If you have not that many people of color within the dataset, then you might actually have biased results,” said Alex Zhavoronkov, Beauty.AI’s chief science officer. “When you’re training an algorithm to recognise certain patterns … you might not have enough data, or the data might be biased.”


In other words, because the data set Beauty.AI used to establish its standards of attractiveness did not include enough minorities, the algorithm favoured white people. Beauty.AI was created by a “deep learning” group called Youth Laboratories, with support from Microsoft.
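A toy sketch can make this failure mode concrete. The snippet below is a hypothetical, deliberately simplified scorer — nothing about Beauty.AI’s actual system is implied — in which the learned “beauty standard” is just the average of the training examples, so a group that is scarce in the training data scores poorly no matter what its faces look like:

```python
# A toy illustration (assumed for this article, not Beauty.AI's actual
# model): score faces by closeness to the average of the training data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "face embeddings": group A clusters near 0, group B near 3.
group_a = rng.normal(0.0, 1.0, size=(950, 8))  # 95% of the training set
group_b = rng.normal(3.0, 1.0, size=(50, 8))   # 5%: underrepresented

# The learned "standard of attractiveness" is simply the mean face.
standard = np.vstack([group_a, group_b]).mean(axis=0)

def score(face):
    """Higher score means closer to the learned standard."""
    return -np.linalg.norm(face - standard)

# New contestants: group B scores far worse on average, purely because
# the standard was fitted to a dataset dominated by group A.
test_a = rng.normal(0.0, 1.0, size=(100, 8))
test_b = rng.normal(3.0, 1.0, size=(100, 8))
print("group A average score:", np.mean([score(f) for f in test_a]))
print("group B average score:", np.mean([score(f) for f in test_b]))
```

In this sketch, the underrepresented group is penalised by construction: the “standard” sits close to the majority cluster because the majority supplied 95% of the examples.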

“I was more surprised about how the algorithm chose the most beautiful people. Of a very large number, they chose people who I may not have selected myself,” Zhavoronkov added.

After Beauty.AI launched this year, roughly 6,000 people from more than 100 countries submitted photos in the hopes that artificial intelligence, supported by complex algorithms, would determine that their faces most closely resembled “human beauty.” But when the results came in, the creators were dismayed to see that there was a glaring factor linking the winners: the robots did not like people with dark skin.

Out of 44 winners, nearly all were white, a handful were Asian, and only one had dark skin. Although the majority of contestants were white, many people of colour submitted photos, including large groups from India and Africa.

Winners of the Beauty.AI contest in the category for women aged 18-29. PHOTO: http://winners2.beauty.ai/#win

The simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.

The Beauty.AI results offer “the perfect illustration of the problem”, said Bernard Harcourt, a Columbia University professor of law and political science who has studied “predictive policing”, which has increasingly relied on machines. “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.” The case is a reminder that “humans are really doing the thinking, even when it’s couched as algorithms and we think it’s neutral and scientific,” he said.


According to Sorelle Friedler, a professor of computer science at Haverford College, a major problem is that minority groups are, by their nature, underrepresented in datasets, which means algorithms can reach inaccurate conclusions for those populations without their creators detecting it. For example, she said, an algorithm that was biased against Native Americans could still be judged a success overall, given that they make up only 2% of the population.
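Friedler’s point follows from back-of-the-envelope arithmetic. In the sketch below, the accuracy figures are invented for illustration; only the 2% population share comes from her example:

```python
# Hypothetical arithmetic sketch: an overall accuracy number can hide
# total failure on a small group. Only the 2% share is from the article;
# the accuracy figures are assumed for illustration.
p_majority, p_minority = 0.98, 0.02       # minority is 2% of the population
acc_majority, acc_minority = 0.95, 0.00   # model fails the minority entirely

overall = p_majority * acc_majority + p_minority * acc_minority
print(f"overall accuracy: {overall:.1%}")  # 93.1% -- still looks 'successful'
```

A headline accuracy above 93% would usually be reported as a success, even though in this hypothetical the system gets every member of the minority group wrong.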

Zhavoronkov added that when Beauty.AI launches another contest round this fall, he expects the algorithm to include a number of changes designed to weed out discriminatory results. “We will try to correct it.”

This article originally appeared on The Guardian.
