The Social Dilemma and instrumentalization

One of the things I liked about The Social Dilemma is that it pointed out that the problem isn't necessarily the existence of the technology, the algorithms, or the social communication platforms in themselves... but that a major issue arises when we turn those algorithms back onto ourselves.

It was a subtle point made in passing, but in my opinion it's the whole point: instrumentalizing people by applying algorithms to them produces bad outcomes for them.

It's not just that the algorithms are biased (they are); it's the simple fact that they are being used to measure and sort people, and then to provide or enforce different outcomes depending on how someone gets sorted.

I'm curious whether there are any exceptions, or whether this instrumentalization is universally unhealthy for humans because it is inherently dehumanizing.


A possible exception might be cases where the algorithmic treatment is meant to benefit the recipient rather than being punitive or extractive.

My two reference points there are:

1) The behavior-based energy-efficiency work my former company did, which was a net win for the consumer: it helped them save money without any punishment or extraction.

2) In the book Weapons of Math Destruction, Cathy O'Neil makes the same claim.

I wonder
