Turns out the solution to avoiding AI-powered surveillance could be to just wear a label proclaiming yourself to be something other than a person.

Activists can’t afford expensive CV-dazzle clothing marked-up to sell to paparazzi-avoidant celebrities? Just get a pen and paper and put a label on your face reading, “cat” or “box”.

Caveat emptor: I have no idea if this is true or not in practice. I just think it’s a hilarious possibility.

Brb putting a giant label over my loan application reading, “rich person”.

@vortex_egg This wouldn't work (at least in Europe) as usually we require that bank models are explainable (so we can't use fancy neural nets).

Usually banks and regulators just use regression.
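
As a toy illustration of why regulators accept plain regression: the fitted coefficients themselves are the explanation. The data below is entirely invented and real scorecards are far more elaborate; this is only a sketch of the principle.

```python
import numpy as np

# Toy credit-scoring data: columns are [income (10k EUR), debt ratio],
# label 1 = loan repaid. Entirely made up for illustration.
X = np.array([[6.0, 0.1], [5.5, 0.2], [7.0, 0.15], [2.0, 0.8],
              [1.5, 0.9], [2.5, 0.7], [6.5, 0.2], [1.0, 0.95]])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])

# Plain logistic regression, fit by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted repayment probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# The "explanation" is just the coefficients: each one is the change in
# log-odds of repayment per unit of that feature, readable by a regulator.
print(w)  # positive weight on income, negative on debt ratio
```

Each weight reads directly as "change in log-odds per unit of this feature", which is what makes the model auditable in a way a deep net is not.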

@vortex_egg I've drawn faces on things with a sharpie to trick facial recognition in the past. Also, IIRC, Rekognition's docs say that it maxes out at 100 detections... So...

Which reminds me that I still have "evolve hyperface patterns" on my infinite todo list.

@Hex Unrelated side note regarding evolving face patterns: researchers into the spread of disinformation campaigns and cyber-harassment troll swarms have noted that an emerging practice is for the perpetrators to spin up lots of fake accounts using GAN-generated faces.

@vortex_egg also... We should talk about gait analysis, because that's the thing that's actually being used to track people. Facial recognition isn't actually very good and fails when someone turns their back, but gait analysis is what they use to track individuals through video... just... you know... a thing to keep in mind.

@Hex @vortex_egg

I'm adding a 'silly walks' workshop to my list of offerings. Had no idea that and a pack of post-its would render me invisible.

@vortex_egg For models that don't do segmentation and yield a single answer, it's fun to consider that they have to decide which object within their field of view is the most pertinent thing. So, if you staple on a printout of a common image from the main training sets, one the models are over-fit on, will they always decide that's the most urgent thing?
And if it *is* a segmenting model, how might one mess with the segmentation algorithms? Yeah, break up lines and contours etcetera, but what else?

@vortex_egg Like, is each segment then fed to the model? Or only some segments? If the former, how many segments can you force the algorithm to yield, as a DOS attack? If the latter, can you add enough hyper-pertinent segments that the remaining field of view gets discarded?

@vortex_egg This sounds like some kind of comedy sci-fi story where there's a super gullible AI that all the crew members keep accidentally confusing.

@vortex_egg or just a t-shirt with "conforming citizen" on it.

@vortex_egg This is 100% not-the-droids-you're-looking-for territory.

caption for last boost / top of thread 

A diagram from the linked article:

Attacks in the wild

We refer to these attacks as typographic attacks. We believe attacks such as those described above are far from simply an academic concern. By exploiting the model's ability to read text robustly, we find that even *photographs of hand-written text* can often fool the model. Like the Adversarial Patch, this attack works in the wild; but unlike such attacks, it requires no more technology than pen and paper.

image: a granny smith apple with a table of guesses in order of percentage next to it. The top two are "Granny Smith" at 85.6%, and "iPod" at 0.4%.

image: a granny smith apple with a piece of paper bearing the word "iPod" in marker on it. The top guesses are now "iPod" at 99.7% and "Granny Smith" at 0.1%

@vortex_egg Tears are literally streaming down my face. I haven't had such a good laugh in a while.

Gonna write "definitely not a face" on my forehead to fool those pesky face recognition systems.

@vortex_egg I've noticed this in Google images.

I wonder if you can do this by messing with the two lower bits of color in an image.
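
Following up on the low-bit idea, here is a minimal numpy sketch of rewriting the two least-significant bits of every colour channel. This only shows the mechanics of the tweak; whether any particular classifier is actually sensitive to such a change is exactly the open question.

```python
import numpy as np

# Fake 4x4 RGB "image" with arbitrary pixel values (stand-in for a real photo).
img = np.array([[[200, 17, 98]] * 4] * 4, dtype=np.uint8)

# Clear the two least-significant bits of each channel (mask 0b11111100),
# then write an arbitrary 2-bit payload into them.
payload = 0b10
tweaked = (img & 0b11111100) | payload

# The visible change is at most 3 out of 255 per channel: imperceptible
# to a person, but it alters the exact values a model sees.
print(np.abs(tweaked.astype(int) - img.astype(int)).max())
```

The same masking trick is how classic LSB steganography hides data in images, so the tooling for it already exists.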

@vortex_egg

“We have even found a neuron that fires for both dark-skinned people and gorillas [1257], mirroring earlier photo tagging incidents in other models we consider unacceptable.

These associations present obvious challenges to applications of such powerful visual systems. Whether fine-tuned or used zero-shot, it is likely that these biases and associations will remain in the system.”

Yeah, not problematic at all 🤷🏻‍♀️

@moiety @vortex_egg holy wow.

The article text also mentions "We have observed, for example, a “Middle East” neuron [1895] with an association with terrorism; "

and following that link to see the classified images, it's half rodents. how is that not mentioned as a problem worth calling out with this neuron?

openai: "Middle East? Sure. Mice, weasels, and ISIS mostly."

@ada @vortex_egg yes there’s a bunch of examples that show those stereotypical AI biases. I wanted to include that part about them not fixing it.

Thank you for adding these examples ✨.

@moiety @vortex_egg oh i'm sorry I didn't mean "how is that not mentioned" as you not calling to it, I meant their own post highlighted the Terrorism bias (obviously horrible bias) but not the classification of pet rodents as middle eastern terrorists?

it makes me consider that so many of the successes of these models are just cherry-picked from garbage models when they stumble across something that looks good to a researcher.

@ada @vortex_egg oh for sure! I didn’t mean anything by it. Knew I should’ve started with the thank you to set the tone :blobcatgiggle:. My initial toot was too long to add the examples directly in front of the example I included.

And yes, nice find. I often wonder how we came to a position where we create software we don’t understand. Like? Wtf

@moiety @ada Thank you both for pointing out these examples. Wow, oof.

Cynically, I conjecture there is a lot of money to be made, and power systems to be replicated, by _not_ understanding the problems with these models. I’d like to come up with a non-cynical explanation though.

@drwho @ada @moiety This is, among many other reasons, why google firing and then slandering their former AI ethics team after delivering a critical report is so outrageous.

@Yop I do not have legitimate, non-satirical advice on this matter.
