Microsoft updates its facial recognition tech to perform better across skin tones and genders

By Ng Chong Seng - on 27 Jun 2018, 9:30am

(Image source: Microsoft.)

Microsoft has announced an update to its facial recognition technology that makes it better at recognizing gender across different skin tones.

According to Microsoft, the improvements have reduced error rates for men and women with darker skin by as much as 20 times. For all women, error rates fell by nine times.

What’s to blame for the errors in the first place? In a nutshell: insufficient and unrepresentative data.


The higher error rates on females with darker skin highlight an industrywide challenge: Artificial intelligence technologies are only as good as the data used to train them. If a facial recognition system is to perform well across all people, the training dataset needs to represent a diversity of skin tones as well as factors such as hairstyle, jewelry and eyewear.
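The disparity Microsoft describes is easy to miss if you only look at overall accuracy: a classifier can score well in aggregate while failing badly for an underrepresented group. A minimal sketch of computing per-group error rates, using made-up numbers (not Microsoft's actual data), illustrates the point:

```python
# Hypothetical evaluation results for a gender classifier, broken down
# by demographic group. Counts are illustrative only.
results = {
    "lighter-skinned men":   {"correct": 990, "wrong": 10},
    "lighter-skinned women": {"correct": 970, "wrong": 30},
    "darker-skinned men":    {"correct": 940, "wrong": 60},
    "darker-skinned women":  {"correct": 790, "wrong": 210},
}

def error_rate(group):
    """Fraction of misclassified faces within one demographic group."""
    total = group["correct"] + group["wrong"]
    return group["wrong"] / total

for name, group in results.items():
    print(f"{name}: {error_rate(group):.1%} error rate")

# Aggregate accuracy masks the per-group gap.
total_correct = sum(g["correct"] for g in results.values())
total = sum(g["correct"] + g["wrong"] for g in results.values())
print(f"overall accuracy: {total_correct / total:.1%}")
```

With these hypothetical counts, overall accuracy exceeds 92% even though one group sees a 21% error rate, twenty-one times that of the best-served group. This is why per-group evaluation, and a training set balanced across groups, is central to the kind of fix Microsoft describes.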

To improve the tech, the Face API team at Microsoft has been working with experts on bias and fairness across the company. Three major changes followed: the datasets used for training and benchmarking were expanded and revised, new data collection efforts were launched to further improve the training data, and the gender classifier itself was improved to generate better results.

While Microsoft says improving the performance of the gender classifier in its Face API is mainly a technical challenge, it also warns of a more nuanced challenge, which is how to separate our own human biases from the AI technology we create.

Hanna Wallach, a senior researcher in Microsoft’s New York research lab, explains:

If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.

To that end, Wallach’s team is developing best practices for the detection and mitigation of bias and unfairness along the entire development pipeline of Microsoft’s AI-powered products and services, from idea creation and data collection to model training, deployment, and monitoring.

Source: Microsoft.