
Facial Recognition Has a Blind Spot

Melanin-biased software leaves a lot of room for error

Illustration: Jordan Moss

Online shopping provides something retail stores usually don’t: personalization. When you log in to a store’s smartphone app or website, the algorithms remember who you are; suggest what you’re most likely to purchase; highlight past purchases; and keep track of your favorite brands, sports teams, or foods.

So imagine being able to use that same personalization inside a retail store, via facial recognition. Your face is scanned as soon as you walk by a kiosk or camera. You can skip past the retail clerks suspiciously refolding clothes wherever you’re standing or repeatedly asking if you “need something.” Ideally, you could walk into a store and the software would already confirm you’re a repeat customer.

Last year, Cali Group (the owner of CaliBurger) piloted facial recognition software for its in-store loyalty program. Used at the National Retail Federation’s annual expo, the software could identify registered customers, activate their loyalty accounts, and display their favorite meals from the West Coast restaurant. The restaurant was also in the early stages of planning face-based payments to replace credit cards.

For people of color, facial recognition could be just as stressful as the retail clerk who’s following them around a store.

Samsung and AT&T used facial recognition software at the expo too, Forbes reports. The companies used it to calculate demographics and store traffic and to send store associates the names of incoming shoppers. Even Walmart has been working to patent facial recognition software that can detect a shopper’s mood. Much like Cloverleaf’s shelfPoint, Walmart’s goal is to identify how customers are feeling at the moment something catches their eye enough to buy it.

This software may help retailers become more familiar with their customers, lower the threat of shoplifting, and bridge the gap between online and brick-and-mortar retail. But for people of color, it could be just as stressful as the retail clerk who follows them around a store.

According to the Electronic Frontier Foundation, a nonprofit that defends civil liberties in the digital world, facial recognition software is prone to error, and it is especially error-prone when identifying people of color, women, and younger people. A “false negative” means the software fails to match a person’s face to any record at all; a “false positive” matches the face to the wrong person. For law enforcement, that second failure poses a serious problem. Since 2013, San Diego police officers have been able to stop and photograph a person of interest and run the face through the Tactical Identification System (TACIDS), which compares it against more than a million booking photos.
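To make those two failure modes concrete, here is a minimal, purely illustrative Python sketch of threshold-based face matching. Every name, number, and embedding in it is hypothetical; this is not how TACIDS or any vendor’s product actually works, only a sketch of the general technique.

```python
# Illustrative sketch of threshold-based face matching and its two
# failure modes. All names, dimensions, and thresholds are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, gallery, threshold=0.8):
    """Return the gallery identity most similar to the probe embedding,
    or None if no score clears the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

rng = np.random.default_rng(0)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}

# A noisy re-capture of person_a's face:
probe = gallery["person_a"] + rng.normal(scale=0.3, size=128)

# A *false negative* is when person_a's own probe scores below the
# threshold and match_face returns None. A *false positive* is when a
# probe happens to score above the threshold against someone else's
# embedding -- the failure mode that can turn a mug-shot search into a
# wrongful identification.
print(match_face(probe, gallery))
```

If a model’s embeddings are systematically less reliable for darker-skinned faces, both failure modes become more common for exactly those groups.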

For White men, the software works correctly 99% of the time. For darker-skinned women, the error rate approached 35%.

If the person photographed happens to resemble someone who already has a mug shot on file, whether or not that earlier arrest was dismissed, the match could lead to a wrongful arrest. And because the database is made up largely of African Americans, people of Latin descent, and immigrants, the software is far more likely to find a “similar” face for members of those groups than for anyone else.


For White men, the New York Times reports, the software works correctly 99% of the time. Lighter-skinned women were misidentified by gender up to 7% of the time. For darker-skinned men, the error rate shot up to 12%, and for darker-skinned women it approached 35%.

One researcher at the MIT Media Lab, Joy Buolamwini, had to go as far as putting on a white mask before facial recognition software would detect her face at all. Buolamwini went on to give a TED Talk about the issue and founded the Algorithmic Justice League, giving people a platform to report being victims of facial recognition bias.

She’s not the first in academia to notice. Charles Isbell, an executive associate dean at the Georgia Institute of Technology, has spoken of encountering melanin bias in artificial intelligence over more than 30 years.

With this major flaw running through so much of the software, why do companies continue to invest?

For some, it still comes down to convenience and money. In 2017, the Ladies Professional Golf Association (LPGA) used video facial recognition technology at the main entrance of its California country club. The technology let members of the media access a VIP section and secured areas around the LPGA event. Attendees may have felt safer knowing that this level of security was in place while they celebrated the first LPGA major championship of the year.

Madison Square Garden has already used it for NBA games (New York Knicks) and NHL games (Rangers). While fans may be amused by on-the-spot zoom-ins as they watch a game, or by being egged on to kiss, that 360-degree gigapixel image at MSG also helps businesses profile the crowd: women versus men, racial identification, age estimates, and even partnership opportunities for fans to promote sponsor events on their own social media.

Government officials and politicians are not all sitting quietly while retailers, restaurants, sports stadiums, and other organizations continue to deploy artificial intelligence. Some are challenging its use until the bias is corrected. Seven members of Congress sent letters to the Federal Trade Commission (FTC), the Federal Bureau of Investigation (FBI), and the Equal Employment Opportunity Commission (EEOC) asking how artificial intelligence is influencing commerce and surveillance.

Democratic presidential candidates Kamala Harris and Elizabeth Warren, along with Senator Patty Murray, asked the EEOC to determine whether the technology violates the Civil Rights Act of 1964, the Equal Pay Act of 1963, and/or the Americans with Disabilities Act of 1990. While many have been outspoken about the problems A.I. poses for people of color, this may be the first time those flaws become a hot topic during the upcoming presidential debates.


Would you like to receive Shamontiel’s Weekly Newsletter via MailChimp? Sign up today!

Check out her six Medium pubs: BlackTechLogy, Doggone World, Homegrown, I Do See Color, Tickled and We Need to Talk. Visit Shamontiel.com to read about her.
