First-Ever AI-Judged Beauty Contest Proves Robots Are Racist

by editors
Over 60,000 people from all over the world participated in an international beauty contest judged by artificial intelligence, but the results were very disappointing.

(Image: Beauty.AI)

Some humans might be racist, but apparently robots can be too. The results of an international beauty contest judged entirely by artificial intelligence certainly indicate as much.

Beauty.AI, created by Youth Laboratories and supported by Microsoft, attempted to select winners from submitted photos by analyzing specific traits that contribute to perceived outer beauty. The algorithms looked at physical traits such as wrinkles, pimples, blemishes, facial symmetry and perceived age to identify the most attractive candidates.
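To make the approach concrete, here is a minimal sketch of what such a trait-based scoring pipeline could look like. It is purely illustrative: the feature names, weights and sample data below are assumptions, not Beauty.AI's actual code or model.

# Hypothetical sketch of a trait-based beauty-scoring pipeline.
# All feature names, weights and candidates are illustrative
# assumptions, not Beauty.AI's actual system.

from dataclasses import dataclass

@dataclass
class FaceFeatures:
    wrinkle_score: float   # 0.0 (smooth) to 1.0 (heavily wrinkled)
    blemish_score: float   # 0.0 (clear skin) to 1.0 (many pimples/blemishes)
    symmetry: float        # 0.0 (asymmetric) to 1.0 (perfectly symmetric)
    perceived_age: float   # age in years estimated from the photo
    stated_age: float      # age reported by the contestant

def attractiveness_score(f: FaceFeatures) -> float:
    """Combine traits into a single score; higher means 'more attractive'."""
    # Reward symmetry, penalize wrinkles and blemishes.
    score = 0.5 * f.symmetry - 0.3 * f.wrinkle_score - 0.2 * f.blemish_score
    # Penalize looking older than one's stated age.
    score -= 0.01 * max(0.0, f.perceived_age - f.stated_age)
    return score

# Rank candidates by score, as a contest would to pick its winners.
candidates = [
    ("contestant_a", FaceFeatures(0.1, 0.05, 0.9, 24.0, 25.0)),
    ("contestant_b", FaceFeatures(0.4, 0.20, 0.7, 38.0, 33.0)),
]
ranking = sorted(candidates, key=lambda c: attractiveness_score(c[1]), reverse=True)
print([name for name, _ in ranking])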

The project’s first contest, which drew more than 60,000 applicants from over 100 countries, including India and several African nations, has just come to an end, but the results were highly disappointing. Of the 44 winners, only a handful were Asian and just one was dark-skinned. All the others were white.


Although there are several reasons why the algorithm preferred white people, the primary problem was that the data the programmers used to establish standards of beauty did not include many minorities, said Beauty.AI’s chief science officer Alex Zhavoronkov. While the artificial intelligence was not built to treat white skin as a sign of beauty, the underrepresentation of minorities in datasets can lead the algorithm to inaccurate conclusions, because it learned mostly from analyzing pictures of white people.
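A toy example makes the mechanism clear. Suppose a model learns a "beauty prototype" by averaging the faces it was trained on: if nine out of ten training photos are light-skinned, darker faces land far from that prototype and score poorly, even though skin tone was never an explicit criterion. The sketch below is purely illustrative; the single "skin tone" feature and all numbers are invented for demonstration.

# Toy illustration of dataset bias. The model's "beauty prototype"
# is just the average of its training photos, so a skewed training
# set produces a skewed prototype. All data here is invented.

def train_prototype(training_faces):
    """Average each feature across the training set."""
    n = len(training_faces)
    dims = len(training_faces[0])
    return [sum(face[i] for face in training_faces) / n for i in range(dims)]

def score(face, prototype):
    """Negative squared distance to the prototype: closer = 'more beautiful'."""
    return -sum((a - b) ** 2 for a, b in zip(face, prototype))

# Features: [skin_tone (0 = dark, 1 = light), symmetry]
# Nine light-skinned faces for every dark-skinned one.
training = [[1.0, 0.8]] * 9 + [[0.0, 0.8]]
prototype = train_prototype(training)  # skin_tone averages to 0.9

light_face = [1.0, 0.8]
dark_face = [0.0, 0.8]  # identical symmetry, different skin tone
print(score(light_face, prototype))  # approx -0.01: scores high
print(score(dark_face, prototype))   # approx -0.81: scores low

The dark-skinned face is penalized purely because it is unlike the training data, which is essentially the failure mode Zhavoronkov describes.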

Read More: Internet Trolls Turn Microsoft’s AI Bot Into A Hitler Loving Racist

The bigger problem is that the humans who create these programs have their own inherent prejudices, so despite the belief that algorithms are objective, they can reflect their programmers’ preexisting biases.

“Humans are really doing the thinking, even when it’s couched as algorithms and we think it’s neutral and scientific,” said Bernard Harcourt, Columbia University professor of law and political science who has studied “predictive policing.”

The debacle has sparked controversy over the ways algorithms can perpetuate prejudice, often yielding offensive results that can have a devastating impact on people of color.

Civil rights organizations have raised concerns about algorithm-based predictive policing tools, which use data to forecast where crimes will occur. Reliance on flawed statistics can reinforce racial intolerance and fuel harmful law-enforcement practices.

“That’s truly a matter of somebody’s life is at stake,” said Sorelle Friedler, a professor of computer science at Haverford College.

Read More: Law Firm Hires First Artificially Intelligent ‘Attorney’