Google is Improving Skin Tone Representation Across All Products


The color of your skin is a big part of who you are, and while it might not sound like something important, it is safe to say that a lot of people feel excluded because of their skin tone. Oftentimes, the cameras that capture images do not capture skin tones properly, and Google is looking to change that. Last year, Google announced Real Tone for Pixel, which was just one example of Google's efforts.

Today, Google has introduced a new step in its commitment to image equity and to improving representation across all its products. Google has partnered with Harvard professor and sociologist Dr. Ellis Monk, and the company is releasing a new skin tone scale that is designed to be more inclusive of the spectrum of skin tones we see in our daily lives.


The Monk Skin Tone Scale Will Change How Different Skin Tones Are Represented, Thanks to Google

This is what the scale looks like; it was designed to be easy to use for the development and evaluation of technology.

Google is calling it the Monk Skin Tone Scale, and you can look at it below.

This is what Google has to say about the Monk Skin Tone Scale:

Updating our approach to skin tone can help us better understand representation in imagery, as well as evaluate whether a product or feature works well across a range of skin tones. This is especially important for computer vision, a type of AI that allows computers to see and understand images. When not built and tested intentionally to include a broad range of skin-tones, computer vision systems have been found to not perform as well for people with darker skin.

The MST Scale will help us and the tech industry at large build more representative datasets so we can train and evaluate AI models for fairness, resulting in features and products that work better for everyone — of all skin tones. For example, we use the scale to evaluate and improve the models that detect faces in images.
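To make the quoted idea concrete, here is a minimal sketch of what evaluating a face-detection model across the scale could look like. The records, bucket labels, and recall metric below are illustrative assumptions, not Google's actual methodology or data: each test image is annotated with one of the ten MST buckets, and we compare detection rates across buckets.

```python
# Hypothetical fairness-evaluation sketch: measure a face detector's
# per-bucket recall across the 10 Monk Skin Tone (MST) buckets.
# All data here is made up for illustration.
from collections import defaultdict

# (mst_bucket, detector_found_face) pairs for an annotated test set.
records = [
    (1, True), (1, True), (3, True), (3, False),
    (7, True), (7, False), (10, False), (10, True),
]

def recall_by_bucket(records):
    """Return detection recall per MST bucket: faces found / faces total."""
    found = defaultdict(int)
    total = defaultdict(int)
    for bucket, hit in records:
        total[bucket] += 1
        if hit:
            found[bucket] += 1
    return {b: found[b] / total[b] for b in total}

recalls = recall_by_bucket(records)
# A simple fairness signal: the gap between the best- and worst-served buckets.
gap = max(recalls.values()) - min(recalls.values())
```

A large gap would flag that the model underperforms for some skin tones, which is exactly the kind of disparity a shared scale makes measurable.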

You can read more about it here.
