A team of researchers at the University of Chicago has developed an algorithm that makes tiny, imperceptible edits to your images in order to mask you from facial recognition technology. Their invention is called Fawkes, and anybody can use it on their own images for free.
The algorithm was created by researchers in the SAND Lab at the University of Chicago, and the open-source software tool that they built is free to download and use on your computer at home.
The program works by making "tiny, pixel-level changes that are invisible to the human eye," but that nevertheless prevent facial recognition algorithms from categorizing you correctly. It's not so much that it makes you impossible to categorize; it's that the algorithm will categorize you as a different person entirely. The team calls the result "cloaked" photos, and they can be used like any other:
You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo.
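The mechanism described above — tiny, budget-limited pixel changes that drag an image's feature-space representation toward a different identity — is essentially a targeted adversarial perturbation. Below is a minimal, self-contained sketch of that idea. A toy random linear map stands in for a real face-embedding model, and the sizes, step size, and per-pixel budget `eps` are all illustrative assumptions; this is not the actual Fawkes implementation, only the general principle it builds on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-embedding model: a fixed random linear map from
# a 64-"pixel" image to an 8-dimensional feature vector. (Fawkes targets
# real deep feature extractors; everything here is an illustrative assumption.)
W = rng.normal(size=(8, 64))

def embed(x):
    return W @ x

image = rng.uniform(0.0, 1.0, size=64)   # the photo to cloak
target = rng.uniform(0.0, 1.0, size=64)  # a photo of a *different* identity

eps = 0.03    # per-pixel budget that keeps the edit imperceptible
step = 0.001  # gradient-descent step size

cloaked = image.copy()
for _ in range(200):
    # Gradient of ||embed(cloaked) - embed(target)||^2 with respect to pixels
    grad = 2.0 * W.T @ (embed(cloaked) - embed(target))
    cloaked = cloaked - step * grad
    # Project back into the tiny box around the original image, then into
    # the valid pixel range, so the change stays within the budget
    cloaked = np.clip(cloaked, image - eps, image + eps)
    cloaked = np.clip(cloaked, 0.0, 1.0)
```

After the loop, no pixel has moved by more than `eps`, yet the image's embedding is measurably closer to the target identity than the original's was — which is the sense in which a recognizer "categorizes you as a different person."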
The only difference is that a company like the infamous startup Clearview AI can't use them to build an accurate database that will make you trackable.
Here's a before-and-after that the team created to show the cloaking at work. On the left is the original image, on the right a "cloaked" version. The differences are noticeable if you look closely, but they look like the result of dodging and burning rather than actual alterations that might change the way you look:
You can watch an explanation and demonstration of Fawkes by co-lead authors Emily Wenger and Shawn Shan below:
According to the team, Fawkes has proven 100% effective against state-of-the-art facial recognition models. Of course, this won't make facial recognition models obsolete overnight, but if technology like this caught on as "standard" when, say, uploading an image to social media, it would make maintaining accurate models much more cumbersome and expensive.
"Fawkes is designed to significantly raise the costs of building and maintaining accurate models for large-scale facial recognition," explains the team. "If we can reduce the accuracy of these models to make them untrustable, or force the model's owners to pay significant per-person costs to maintain accuracy, then we would have largely succeeded."
To learn more about this technology, or if you want to download Version 0.3 and try it on your own photos, head over to the Fawkes webpage. The team will be (virtually) presenting their technical paper at the upcoming USENIX Security Symposium running from August 12th to the 14th.