http://sandlab.cs.uchicago.edu/fawkes/
I don’t really understand what’s going on but we should track this here.
I’m gonna quote more of this, but I’ve got to lead not with any real information, just with a funny thing that made me smile, the following text taken out of context:
One fundamental difference is that these approaches can only protect a user when the user is wearing the sweater/hat/placard. Even if users were comfortable wearing these unusual objects in their daily lives, these mechanisms are model-specific, that is, they are specially encoded to prevent detection against a single specific model (in most cases, it is the YOLO model).
OMG, “(in most cases, it is the YOLO model)” is like the mantra of the times!
Alright, what is this? Well, I think we ought to let an over-enthusiastic student and/or advisor explain it:
The SAND Lab at University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models out of their publicly available photos. At a high level, Fawkes “poisons” models that try to learn what you look like, by putting hidden changes into your photos, and using them as Trojan horses to deliver that poison to any facial recognition models of you. Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, “uncloaked” image of you (e.g. a photo taken in public) to the model, the model will fail to recognize you.
Emphasis theirs. Whaaaat? Someone got picked on by facial recognition software as a child…
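Sidebar for the curious: here’s my loose mental model of what “cloaking” is doing, as a minimal sketch in Python. This is not the actual Fawkes code; the feature extractor, the “target” photo, the pixel budget, and the step counts are all placeholders I made up to illustrate the general idea of nudging a photo so a recognizer’s features point at someone else.

```python
# Rough sketch (not the real Fawkes code) of the "cloaking" idea quoted above:
# add tiny pixel changes so a feature extractor embeds your photo near some
# other identity, while clamping the change to an imperceptible budget.
import torch
import torch.nn.functional as F
from torchvision import models

def cloak(image, target_image, steps=100, budget=0.03, lr=0.01):
    """Return a perturbed copy of `image` whose features move toward
    `target_image`'s features, with per-pixel changes limited to `budget`."""
    # Stand-in feature extractor; the real tool uses a trained
    # face-recognition backbone, which this sketch does not ship.
    backbone = models.resnet18()
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    with torch.no_grad():
        target_feat = backbone(target_image)

    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        cloaked = (image + delta).clamp(0, 1)
        feat = backbone(cloaked)
        # Pull the cloaked photo's features toward the target identity.
        loss = F.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation tiny so humans don't notice it.
        with torch.no_grad():
            delta.clamp_(-budget, budget)

    return (image + delta).detach().clamp(0, 1)

# Toy usage with random tensors standing in for real photos (N, C, H, W):
if __name__ == "__main__":
    me = torch.rand(1, 3, 224, 224)
    someone_else = torch.rand(1, 3, 224, 224)
    protected = cloak(me, someone_else)
    print((protected - me).abs().max())  # stays within the pixel budget
```

The gist, as I read their description: your shared photos look normal to people, but a model trained on them learns the wrong features for your face, so an uncloaked photo of you later fails to match.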
Okay, I’ve thought about it for 15 minutes, plan b: move off the grid, computers are evil, Ned Ludd was a collective.
Because I’m not going to do this to my photos; it is already very, very hard to store babby photos. I’m not gonna trip on nation states recognizing everyone, I’m just gonna insist we don’t do that where I live.
Oh, um… ahem.
Yes, we should be sharing resources concerning facial recognition and preventative actions, here in Digital Safety. Folks should not dismiss it out of hand, based solely on feelings about babby photos.
…