How Businesses Can Keep Facial Recognition From Being Biased
Facial recognition technology was put to use quickly, and controversy soon followed.
Since early use cases have been primarily in security and law enforcement, the biggest concern isn't just being put under a watchful eye, but that this eye could hold a serious bias. After all, being falsely accused of a crime because of a computer's error isn't a future any of us want, which is why eliminating bias now is so crucial. That's why we're giving you a rundown of not only how the industry works, but how it can start working toward a brighter, safer, and fairer future for everyone. Check it out below:
The Basics
For most businesses, facial recognition is somewhat of a fringe topic, but it could become very prevalent within the next few years. Although everyday practical uses might include Face ID for Apple Pay purchases, the majority of facial recognition customers are large-scale operations, in a market that could very well reach $15.1 billion by 2025. But let's first discuss how face recognition works to understand how bias plays a role.
On a practical level, facial recognition is the task of taking an image or video and identifying whose face is in it, which sounds simple enough. Most of this work is done by algorithms: a biometric artificial intelligence application maps a set of unique data points onto your face, then recognizes you again from those same points. It's a technology practically all of us are well accustomed to, but the bias comes from some major infrastructure issues that have held the software back.
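The matching step described above can be sketched in a few lines. This is an illustrative simplification, not any vendor's actual pipeline: assume a face has already been reduced to a numeric feature vector (an "embedding") by some upstream model, so recognition becomes a nearest-neighbor comparison against enrolled vectors under a distance threshold. All names, vectors, and the threshold value here are made up for illustration.

```python
import math

def euclidean(a, b):
    # Distance between two face embeddings (lists of floats).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, enrolled, threshold=0.6):
    """Return the name of the closest enrolled face, or None if no
    enrolled embedding falls within the match threshold.
    `enrolled` maps name -> embedding; values are illustrative."""
    best_name, best_dist = None, float("inf")
    for name, emb in enrolled.items():
        d = euclidean(probe, emb)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy enrolled database of 3-dimensional embeddings
# (real systems typically use vectors with 128+ dimensions).
db = {"alice": [0.1, 0.9, 0.3], "bob": [0.8, 0.2, 0.5]}
print(identify([0.12, 0.88, 0.31], db))              # close to alice's vector
print(identify([0.5, 0.5, 0.5], db, threshold=0.1))  # nothing close enough
```

Note where bias enters: the quality of the embeddings depends entirely on the data the upstream model was trained on, so faces underrepresented in training land closer together in this space and get mismatched more often.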
Where Bias Comes From
Although to some it seems improbable that these algorithms could hold bias, early data suggests otherwise. According to the New York Times, in a study conducted by MIT's Media Lab, 35 percent of darker-skinned females amongst a set of 271 photos were misidentified. Furthermore, other results from MIT's study have been credited with demonstrating bias primarily against people of color, which is a dangerous flaw in a law enforcement tool. As the need for accuracy here is imperative, finding and correcting this bias is a big reason why so many people don't quite want to pull the trigger on facial recognition yet.
Despite how easy it would be to blame the computer, major tech companies developing facial recognition software understand that to win customers like federal agencies and police departments, the accuracy needs to be top-tier. As people's freedom is at stake, a facial recognition report that can stand up in a court of law has to meet a universal standard across the board. Finding that standard has so far been a difficult task, often ending up with racial or gender bias, which some companies have been learning how to combat.
How People Are Combating It
The biggest drivers in fighting bias in facial recognition have come from those in the AI, big data, or facial recognition industries, with mixed results. The two biggest solutions thus far have been changing the algorithm (an attempt Google made to its software in 2015 after a PR nightmare) and implementing better and better datasets. According to The Verge, the latter approach of implementing new datasets proved successful for IBM, which says quite a bit about where the industry stands. This raises the question: why wasn't this done before?
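Before a dataset or algorithm fix can be judged, the bias has to be measured, and the MIT findings above came from exactly this kind of check: breaking error rates out by demographic group instead of reporting one overall number. Here is a minimal, hedged sketch of that disaggregated evaluation; the group labels and the toy evaluation log are invented for illustration.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Compute the misidentification rate per demographic group.
    `results` is a list of (group, correct) tuples, where `correct`
    is True when the system identified the face correctly."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy evaluation log: an unbiased system would show similar
# error rates across groups; a large gap is the red flag.
log = ([("group_a", True)] * 95 + [("group_a", False)] * 5
       + [("group_b", True)] * 70 + [("group_b", False)] * 30)
rates = error_rates_by_group(log)
print(rates)  # group_a errs 5% of the time, group_b 30%
```

An overall accuracy figure for this log would look respectable, which is exactly why per-group reporting matters.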
Of course, building AI algorithms for facial recognition software isn't exactly a cakewalk, but it should be noted that improving datasets should be a continual practice. Quite simply, it isn't enough to test small bits of data and then aim to sell the product to national organizations before it's perfected; and while some could argue the algorithms are always in a state of improvement, they are also serving public entities like law enforcement and security. Those customers are crucial to getting facial recognition used across the board; but until that happens, it might be wise for most businesses to start looking at how facial recognition can help on a more local level.
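One concrete form "implementing better datasets" can take is rebalancing the training data so underrepresented groups aren't drowned out. The sketch below oversamples smaller groups to match the largest one; this is one simple technique among many (the companies above haven't published their exact methods), and the group labels are placeholders.

```python
import random

def rebalance(dataset, group_of, seed=0):
    """Oversample underrepresented groups so that every group
    contributes the same number of training examples.
    `group_of` extracts a group label from an example."""
    random.seed(seed)
    buckets = {}
    for ex in dataset:
        buckets.setdefault(group_of(ex), []).append(ex)
    target = max(len(b) for b in buckets.values())
    balanced = []
    for items in buckets.values():
        balanced.extend(items)
        # Duplicate random examples until this group hits the target size.
        balanced.extend(random.choice(items) for _ in range(target - len(items)))
    return balanced

# Toy dataset skewed 90/10 between two groups.
data = [{"group": "a"}] * 90 + [{"group": "b"}] * 10
balanced = rebalance(data, lambda ex: ex["group"])
counts = {g: sum(1 for ex in balanced if ex["group"] == g) for g in ("a", "b")}
print(counts)  # both groups now contribute 90 examples
```

In practice, collecting genuinely new, diverse photos beats duplicating existing ones, but the principle is the same: the model should see every group in comparable volume.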
Taking Things To A Personal Perspective
While companies like Amazon and Google will continue racing over who will lead in big data for facial recognition, there's still a local level at which facial recognition can play a role in a lot of small businesses. For example, over 50 billion photos have been shared on Instagram, which goes to show how massive a photo dataset it is. For most businesses, being able to aggregate the facial expressions of their Instagram followers would be huge in creating a better customer experience while also eliminating bias. Furthermore, it could be argued that social media companies providing access to these datasets would establish a wider pool of candidates, and thus increase recognition accuracy amongst more diverse groups. However, there's been a lot of debate over how much we want facial recognition to be used in social media, especially regarding what we consent to for identification.
Granted, a lot of the technology in development for anything remotely close to that is a few years away (both in terms of legality and execution); but that's not to say facial recognition can't be a useful tool for small businesses today. With how much this technology has already helped the safety and security industry, many are predicting the same practices of facial awareness are going to be adopted heavily by customer loyalty programs, which focus more on repeat customers (leaving little room for bias). As this industry grows, so will the number of participants in facial recognition databases, reducing bias from aggregates of data…one picture at a time.
What are some ways you think facial recognition companies will reduce bias? Comment with your insights below!