Clearview AI first made headlines early last year when a New York Times investigation revealed how this unregulated facial recognition application could identify virtually anyone walking down the street from a single photo. The investigation raised serious privacy concerns, since the application had scraped billions of photos from millions of websites to compile a giant facial recognition database.
Clearview AI made the news again last month when its CEO announced that usage had spiked by 26% after the Capitol attack on January 6th, with some local police departments using the technology to send the identities of suspects to the FBI.
Beyond the massive privacy risks of a facial recognition database of this size in the hands of law enforcement (or anyone else, for that matter), there are also major concerns about law enforcement normalizing facial recognition in general, and about the potential for these technologies to be misused against Black and Brown communities in particular.
Nathan Freed Wessler from the ACLU’s Speech, Privacy, and Technology Project says, “We know who it will be used against most: members of Black and Brown communities who already suffer under a racist criminal enforcement system.”
Compounding this is the fact that many facial recognition systems are biased against people with darker skin and against women. These groups are more likely to be misidentified by facial recognition systems, which can lead to false accusations and severe repercussions when the technology is in the hands of law enforcement.
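To make “bias” concrete: audits of these systems typically compute an error rate separately for each demographic group and compare the gap. Here is a minimal sketch of that idea. Everything in it, including the group labels, the toy match results, and the false_match_rate helper, is hypothetical and for illustration only; it is not Clearview’s system or any real benchmark.

```python
from collections import defaultdict

# Hypothetical evaluation records for a face matcher:
# (demographic_group, predicted_match, true_match)
results = [
    ("group_a", True,  True),
    ("group_a", True,  False),   # false match: a non-match flagged as a match
    ("group_a", False, False),
    ("group_b", True,  True),
    ("group_b", False, False),
    ("group_b", False, True),    # false non-match: a real match missed
]

def false_match_rate(records):
    """Share of true non-matches the system wrongly flagged as matches."""
    non_matches = [r for r in records if not r[2]]
    if not non_matches:
        return float("nan")
    return sum(1 for r in non_matches if r[1]) / len(non_matches)

# Group the records by demographic and compare error rates across groups.
by_group = defaultdict(list)
for rec in results:
    by_group[rec[0]].append(rec)

for group, recs in sorted(by_group.items()):
    print(f"{group}: false match rate = {false_match_rate(recs):.2f}")
```

On this toy data the rates come out to 0.50 versus 0.00, and it is exactly this kind of gap between groups, reported in real audits such as NIST’s studies of demographic effects in face recognition, that the bias findings refer to.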
Further Reading
The featured article focuses mainly on Clearview AI usage after the attack on the Capitol. If you’re interested in learning more about bias in facial recognition technology, this article from MIT News is a great place to start.