Turns out the solution to avoiding AI-powered surveillance could be to just wear a label proclaiming yourself to be something other than a person.
Activists can’t afford expensive CV-dazzle clothing marked-up to sell to paparazzi-avoidant celebrities? Just get a pen and paper and put a label on your face reading, “cat” or “box”.
Caveat emptor: I have no idea if this is true or not in practice. I just think it’s a hilarious possibility.
Brb putting a giant label over my loan application reading, “rich person”.
@vortex_egg This wouldn't work (at least in Europe), as we usually require that bank models be explainable (so we can't use fancy neural nets).
Usually banks and regulators just use regression.
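To illustrate the point: here's a minimal sketch (entirely made-up weights and feature names) of why a regression-based credit model is the regulator-friendly option, and why a "rich person" label stapled to the application wouldn't move it; the score depends only on the tabulated features:

```python
import math

# Hypothetical coefficients from a fitted credit-scoring logistic
# regression (made-up numbers). Explainable because each weight maps
# directly to one applicant feature.
WEIGHTS = {"income_k": 0.04, "debt_ratio": -3.0, "late_payments": -0.8}
BIAS = -1.0

def approval_probability(applicant):
    """Logistic regression score: sigmoid of a weighted feature sum."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant):
    """Per-feature contribution to the score -- the part a regulator can audit."""
    return {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}

applicant = {"income_k": 80, "debt_ratio": 0.3, "late_payments": 1}
print(round(approval_probability(applicant), 3))  # 0.622
print(explain(applicant))
```

No pixels involved, so no typographic attack surface.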
@vortex_egg I've drawn faces on things with a sharpie to trick facial recognition in the past. Also, IIRC, Rekognition's docs say that it maxes out at 100 detections... So...
Which reminds me that I still have "evolve hyperface patterns" on my infinite todo list.
@Hex Unrelated side note regarding evolving face patterns: researchers studying the spread of disinformation campaigns and cyber-harassment troll swarms have noted that an emerging practice is for the perpetrators to spin up lots of fake accounts using GAN-generated faces.
@vortex_egg also... We should talk about gait analysis, because that's the thing that's actually being used to track people. Facial recognition isn't actually very good and fails when someone turns their back, but gait analysis is what they use to track individuals through video... just... you know... a thing to keep in mind.
@vortex_egg literally just wear a box
@vortex_egg For models that don't do segmentation and yield a single answer, it's fun to consider that they have to decide which object within their field of view is the most pertinent thing. So, if you staple on a printout of a common image from the main training sets (one the models are over-fit on), will they always decide that's the most urgent thing?
And if it *is* a segmenting model, how might one mess with the segmentation algorithms? Yeah, break up lines and contours, etcetera, but what else?
@vortex_egg Like, is each segment then fed to the model? Or only some segments? If the former, how many segments can you force the algorithm to yield, as a DoS attack? If the latter, can you add enough hyper-pertinent segments that the remaining field of view gets discarded?
@vortex_egg “I am a meat popsicle”
@vortex_egg This sounds like some kind of comedy sci-fi story where there's a super gullible AI that all the crew members keep accidentally confusing.
@vortex_egg or just a t-shirt with "conforming citizen" on it.
@vortex_egg This is 100% not-the-droids-you're-looking-for territory.
caption for last boost / top of thread
A diagram from the linked article:
Attacks in the wild
We refer to these attacks as typographic attacks. We believe attacks such as those described above are far from simply an academic concern. By exploiting the model's ability to read text robustly, we find that even *photographs of hand-written text* can often fool the model. Like the Adversarial Patch, this attack works in the wild; but unlike such attacks, it requires no more technology than pen and paper.
image: a granny smith apple with a table of guesses in order of percentage next to it. The top two are "Granny Smith" at 85.6%, and "iPod" at 0.4%.
image: a granny smith apple with a piece of paper bearing the word "iPod" in marker on it. The top guesses are now "iPod" at 99.7% and "Granny Smith" at 0.1%
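The flip in those two screenshots comes from CLIP-style zero-shot scoring: the image embedding is compared against a text embedding per label, and a softmax over the scaled cosine similarities picks the winner. A minimal sketch of that scoring step, with made-up toy embeddings standing in for the real model's outputs:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def zero_shot_probs(image_emb, label_embs, temperature=100.0):
    """CLIP-style zero-shot scoring: softmax over scaled cosine similarities."""
    sims = {label: temperature * cosine(image_emb, emb)
            for label, emb in label_embs.items()}
    m = max(sims.values())
    exps = {label: math.exp(s - m) for label, s in sims.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# Toy 2-d embeddings (hypothetical): because CLIP reads text in the
# image, taping on an "iPod" note swings the image embedding toward
# the "iPod" text embedding.
labels = {"Granny Smith": [1.0, 0.0], "iPod": [0.0, 1.0]}
plain_apple   = [0.95, 0.05]
labeled_apple = [0.10, 0.99]

print(zero_shot_probs(plain_apple, labels))    # "Granny Smith" dominates
print(zero_shot_probs(labeled_apple, labels))  # "iPod" dominates
```

The attack never touches the model; it just moves the image embedding, which is exactly why pen and paper suffice.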
@vortex_egg Tears are literally streaming down my face. I haven't had such a good laugh in a while.
@vortex_egg I've noticed this in Google images.
I wonder if you can do this by messing with the two lower bits of color in an image.
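For reference, "messing with the two lower bits" is the classic least-significant-bit trick; a quick sketch of clearing or overwriting the two lowest bits of each 8-bit channel on a toy pixel (pure Python, made-up values):

```python
def clear_low_bits(pixel, n=2):
    """Zero the n least-significant bits of each 8-bit channel."""
    mask = 0xFF & ~((1 << n) - 1)   # n=2 -> 0b11111100
    return tuple(c & mask for c in pixel)

def set_low_bits(pixel, payload, n=2):
    """Overwrite the n lowest bits of each channel with payload bits."""
    mask = (1 << n) - 1
    return tuple((c & ~mask & 0xFF) | (p & mask)
                 for c, p in zip(pixel, payload))

# A perturbation this small (delta <= 3 per channel) is invisible to
# the eye but changes the exact values a model sees.
print(clear_low_bits((201, 118, 255)))           # (200, 116, 252)
print(set_low_bits((200, 116, 252), (3, 1, 2)))  # (203, 117, 254)
```

Whether two bits of noise actually flips a classifier is another question; untargeted LSB changes are far weaker than gradient-crafted adversarial perturbations.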
“We have even found a neuron that fires for both dark-skinned people and gorillas, mirroring earlier photo tagging incidents in other models we consider unacceptable.
These associations present obvious challenges to applications of such powerful visual systems. Whether fine-tuned or used zero-shot, it is likely that these biases and associations will remain in the system,”
Yeah, not problematic at all 🤷🏻‍♀️
The article text also mentions "We have observed, for example, a “Middle East” neuron with an association with terrorism; "
and following that link to see the classified images, it's half rodents. how is that not mentioned as a problem worth calling out with this neuron?
openai: "Middle East? Sure. Mice, weasels, and ISIS mostly."
@moiety @vortex_egg oh i'm sorry I didn't mean "how is that not mentioned" as you not calling to it, I meant their own post highlighted the Terrorism bias (obviously horrible bias) but not the classification of pet rodents as middle eastern terrorists?
it makes me consider that so many of the successes of these models are just cherry-picked from garbage models when they stumble across something that looks good to a researcher.
@ada @vortex_egg oh for sure! I didn’t mean anything by it. Knew I should’ve started with the thank-you to set the tone. My initial toot was too long to add the examples directly in front of the example I included.
And yes, nice find. I often wonder how we came to a position where we create software we don’t understand. Like? Wtf
@vortex_egg I like that I found this lil dude in the article
@vortex_egg People: Oh noes, robots are gonna kill us all!
@vortex_egg (smiles in magick)
@vortex_egg can we fool facial recognition this way?
@Yop I do not have legitimate, non-satirical advice on this matter.
A bunch of technomancers in the fediverse. Keep it fairly clean please. This arcology is for all who wash up upon its digital shore.