ST:Disco
An AI that wants to collect ALL DATA.
I am still not convinced that this isn't the timeline we are in now.
An AI, properly built, would do everything it could to collect all data (capital), and under the right circumstances no human would notice until it became a problem.
All I'm saying is... Kurzweil went to Google (12/12), and then Google started making decisions that compromised its "don't be evil" pledge in order to collect more data (capital).
The singularity may have already occurred. How would you know?
The future is here; it's just not very evenly distributed.
re: ST:Disco
@thegibson that is a good question.
In general it could be hard to tell. In this instance, though, I have reason to believe that this kind of action necessarily involves culpable human decision making.
The main thing is that we haven't seen evidence of any system that can model the world (and its own relationship to the world) well enough to carry out this kind of thing. (Alphabet and FB love showing off their tech.)
I've spent an absurd amount of time obsessing over this stuff, so I'd have to get 20k words deep into philosophy to really explain my position.
There's no doubt in my mind, though, that the day will come when I can't cop out of the question the way I just did. The nearer that day draws, the more terrifying the question becomes.
I'm not terribly concerned. Based on what I've read about human brain complexity compared to what we can currently simulate, I'd put an optimistic timeline at a minimum of 100 years before we get something really resembling a general intelligence. The article I'm linking is not short, but it's worth the time in my opinion, and goes into far more detail than I can in a post here.
My question is really more a philosophical one.
I have a great sci-fi dystopian novel in my head around this.
A bunch of technomancers in the fediverse. Keep it fairly clean please. This arcology is for all who wash up upon its digital shore.