Amid growing concern over privacy and flawed facial recognition models, more than 100,000 people have downloaded a free photo editing tool that “cloaks” people’s photos, protecting their identities from illegal data miners.
In mid-July, researchers from the University of Chicago unveiled the free code behind Fawkes, an editing tool that tricks facial recognition systems into seeing someone who isn’t there in photos. The product went viral on Hacker News, and the researchers are planning to talk to a browser company next week to see if Fawkes could be integrated into its product, said Emily Wenger, a third-year Ph.D. student who helped develop the software.
Wenger believes everyone should be using Fawkes to protect their images from companies like Clearview AI, which was revealed in January to have trained its facial recognition model on billions of illegally obtained photos.
“We think it’s really important for individuals to have the ability to fight back against these intrusive technology companies that are kind of capitalizing on data sharing and stealing,” she told Built In.
Unlike adversarial design — like the glasses from Chicago-based Reflectacles, which make their wearers hard for facial recognition algorithms to identify — Fawkes distorts photos so that algorithms misidentify people. By subtly altering a small percentage of an image’s pixels, Fawkes can make neural networks confuse, say, Mark Zuckerberg with Sheryl Sandberg.
This means that if data gets illegally scraped from a Fawkes-protected photo, it’s riddled with errors but still appears reliable.
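The pixel-level idea can be sketched in a few lines. To be clear, this is not Fawkes’ actual method — the real tool optimizes perturbations against a face feature extractor — and the `budget` and `epsilon` parameters below are illustrative assumptions, not values from the paper. The sketch only shows the general principle: changing a small, bounded fraction of pixels while leaving the photo visually intact.

```python
import numpy as np

def cloak(image: np.ndarray, budget: float = 0.02, epsilon: int = 8,
          seed: int = 0) -> np.ndarray:
    """Perturb roughly `budget` of the pixels by at most `epsilon` per channel.

    NOT the Fawkes algorithm (which computes targeted, feature-space
    perturbations); this is a minimal illustration of a small, bounded,
    near-invisible pixel change.
    """
    rng = np.random.default_rng(seed)
    out = image.astype(np.int16).copy()           # widen dtype to avoid wraparound
    mask = rng.random(image.shape[:2]) < budget   # select ~2% of pixel locations
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    out[mask] += noise[mask]                      # nudge only the selected pixels
    return np.clip(out, 0, 255).astype(np.uint8)

# A flat gray stand-in "photo"; any HxWx3 uint8 array works the same way.
photo = np.full((64, 64, 3), 128, dtype=np.uint8)
cloaked = cloak(photo)
changed = np.mean(np.any(cloaked != photo, axis=-1))
print(f"fraction of pixels changed: {changed:.3f}")
```

Because each change is capped at `epsilon` intensity levels and touches only a small fraction of pixels, the cloaked image looks essentially identical to a human viewer, which is the property that lets tainted photos pass as reliable training data.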
Over time, Wenger said, using a tool like Fawkes could help individuals erase themselves from models like Clearview AI’s.
But in an interview with the New York Times, Clearview AI CEO Hoan Ton-That said Fawkes did not interfere with the New York City-based company’s algorithm and that, even if it did disrupt the system, “it’s almost certainly too late to perfect a technology like Fawkes and deploy it at scale.”
Wenger, for her part, said that companies like Clearview would have no way of knowing whether their models were trained on tainted data. In testing, she said, Fawkes proved “nearly 100 percent” effective against facial recognition models deployed by Amazon, Microsoft and IBM. All three companies announced earlier this year that they were, at least temporarily, suspending their development of facial recognition tech, citing issues of algorithmic bias, particularly against people of color.
Wenger said she was glad to see recognition at a corporate level that facial recognition needs greater consideration, although “there’s a lot of other small companies waiting to fill the void” the tech giants left.
She said she had a hard time thinking of instances where the use of facial recognition was justified, and pointed to instances where the technology has done harm. In June, for instance, police wrongfully arrested a Black man after a flawed facial recognition system accused him of stealing a watch.
“People need to be mindful of the fact that these algorithms are super imperfect,” Wenger said. “Just because they’re being adopted at a national level, like in airports or at borders, doesn’t mean they’re good.”
Ideally, she said, social media companies and web browsers would integrate a product like Fawkes into their platforms as a means of protecting users’ privacy.
“People are finally realizing that maybe this technology is a bad idea,” Wenger said. “They’re realizing that it has more negative consequences than we initially anticipated.”