Rihanna’s ‘Fenty effect’ could teach AI developers about fighting bias

When I first started buying makeup, I quickly learned the importance of skin tones and undertones. As someone with a light-medium skin tone and yellow undertones, I found that foundations that were too light and pink would leave my skin pallid and ashen. At the time, makeup shade ranges were extremely limited, and the alienation I often felt as a Chinese American growing up in Appalachia was amplified whenever a sales associate would sadly proclaim there was no foundation shade that matched me.

Only in recent years has skin tone diversity become a greater concern for cosmetics companies. The launch of Fenty Beauty by Rihanna in 2017 with 40 foundation shades revolutionized the industry in what has been dubbed the “Fenty effect,” and brands now compete to show greater skin tone inclusivity. Since then, I have personally felt how meaningful it is to be able to walk into a store and buy products off the shelf that acknowledge your existence.

Hidden skin tone bias in AI

As an AI ethics research scientist, when I first began auditing computer vision models for bias, I found myself back in the world of limited shade ranges. In computer vision, where visual information from images and videos is processed for tasks like facial recognition and verification, AI biases (disparities in how well AI performs for different groups) have been hidden by the field’s narrow understanding of skin tones. In the absence of data to measure racial bias directly, AI developers typically only consider bias along light versus dark skin tone categories. As a result, while there have been significant strides in awareness of facial recognition bias against individuals with darker skin tones, bias outside of this dichotomy is rarely considered.
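
To make that definition of bias concrete, a disaggregated evaluation is the usual starting point for an audit: measure a model’s accuracy separately for each skin tone group and report the gap between the best- and worst-served groups. The sketch below illustrates the idea with made-up group labels and toy data, not results from any real system.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute per-group accuracy and the worst-case gap between groups.

    `records` is an iterable of (group, prediction, label) tuples; the
    group labels stand in for whatever skin tone categories an audit
    uses and are purely illustrative here.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, prediction, label in records:
        total[group] += 1
        correct[group] += int(prediction == label)

    per_group = {g: correct[g] / total[g] for g in total}
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Toy face-verification outcomes (1 = match, 0 = no match), invented for illustration.
records = [
    ("lighter", 1, 1), ("lighter", 1, 1), ("lighter", 0, 0), ("lighter", 1, 1),
    ("darker", 1, 0), ("darker", 1, 1), ("darker", 0, 0), ("darker", 0, 1),
]
per_group, gap = disaggregated_accuracy(records)
print(per_group)  # {'lighter': 1.0, 'darker': 0.5}
print(gap)        # 0.5
```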

The skin tone scale most commonly used by AI developers is the Fitzpatrick scale, even though it was originally developed to characterize how Caucasian skin tans or burns. The two deepest shades were added only later to capture “brown” and “black” skin tones. The resulting scale looks much like an old-school foundation shade range, with only six options.

This narrow conception of bias is highly exclusionary. In one of the few studies to examine racial bias in facial recognition technologies, the National Institute of Standards and Technology found that these technologies are also biased against groups outside the light-versus-dark dichotomy, including East Asians, South Asians, and Indigenous Americans. Yet such biases are rarely checked for.

After several years of work, researchers on my team and I found that computer vision models are biased not only along light versus dark skin tones but also along red versus yellow skin hues. AI models performed less accurately for people with darker or more yellow skin tones, and those skin tones are significantly under-represented in major AI datasets. Our work introduced a two-dimensional skin tone scale so that AI developers can identify biases along both dimensions going forward: light versus dark tones and red versus yellow hues. This discovery was vindicating to me, both scientifically and personally.
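
For readers curious what a two-dimensional measure can look like in practice, the sketch below maps an average skin-patch color from sRGB into the CIELAB color space, reading off lightness (the light-versus-dark axis) and hue angle (roughly the red-versus-yellow axis). This is my own illustrative reconstruction using standard color science formulas; the published research may use a different pipeline, and the sample colors are invented.

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 white point)."""
    # Undo the sRGB gamma curve to get linear RGB in [0, 1].
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # Linear RGB -> CIE XYZ (sRGB primaries, D65 illuminant).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    # XYZ -> Lab, normalized by the D65 reference white.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b_star = 200 * (fy - fz)
    return L, a, b_star

def skin_tone_coordinates(rgb):
    """Return (lightness, hue_angle_degrees) for an average skin color.

    Lightness tracks the familiar light-versus-dark axis; the hue angle
    separates redder skin tones (smaller angle) from yellower ones (larger).
    """
    L, a, b_star = srgb_to_lab(*rgb)
    hue = math.degrees(math.atan2(b_star, a))
    return L, hue

# Two invented skin-patch colors with similar lightness but different hues.
print(skin_tone_coordinates((230, 180, 150)))  # similar lightness, redder hue
print(skin_tone_coordinates((225, 185, 130)))  # similar lightness, yellower hue
```

A one-dimensional scale would treat these two sample colors as nearly identical, since their lightness values are close; the hue angle is what separates them, which is exactly the kind of difference the second dimension is meant to surface.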

High-stakes AI

Like discrimination in other contexts, a pernicious feature of AI bias is the gnawing uncertainty it creates. For example, if I am stopped at the border due to a facial recognition model not being able to match my face to my passport, but the technology works well for my white colleagues, is that due to bias or just bad luck? As AI becomes increasingly pervasive in everyday life, small biases can accumulate, resulting in some people living as second-class citizens, systematically unseen or mischaracterized. This is especially concerning for high-stakes applications like facial recognition for identifying criminal suspects or pedestrian detection for self-driving cars.

While detecting AI bias against people with different skin tones is not a panacea, it is an important step forward at a time when there is a growing push to address algorithmic discrimination, as outlined in the EU AI Act and President Joe Biden’s AI executive order. Not only does this research enable more thorough audits of AI models, but it also emphasizes the importance of including diverse perspectives in AI development.

When explaining this research, I have been struck by how intuitive our two-dimensional scale seems to people who have had the experience of purchasing cosmetics—one of the rare times when you must categorize your skin tone and undertone. It saddens me to think that perhaps AI developers have relied on a narrow conception of skin tone to date because there isn’t more diversity, especially intersectional diversity, in this field. My own dual identities as an Asian American and a woman—who had experienced the challenges of skin tone representation—were what inspired me to explore this potential solution in the first place.

We have seen the impact diverse perspectives have had in the cosmetics industry thanks to Rihanna and others, so it is critical that the AI industry learn from this. Failing to do so risks creating a world where many find themselves erased or excluded by our technologies. 

Alice Xiang is a distinguished researcher, accomplished author, and governance leader who has dedicated her career to uncovering the most pernicious facets of AI—many of which are rooted in data and the AI development process. She is the Global Head of AI Ethics at Sony Group Corporation and Lead Research Scientist at Sony AI.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
