Anyway, large language models (LLMs, like GPT-3) are one of the few actually new technologies that tech corporations are racing to market so fast that they've sidelined and censored the pesky ethicists and scientists who keep getting in the way by pointing out the litany of actual harms LLMs cause: discrimination and segregation, wide-scale disinformation, and the environmental impact of excess computation.

The upsides of LLMs for surveillance capitalism are too great to let social good get in the way of their inevitable production.

Remember that inventing effective LLMs is what caused OpenAI to change its charter: from doing AI research in an open-science model in the name of AI safety, to close-sourcing its research, citing the potential for harm, and then immediately seeking VC funding so it could capitalize on production.
