AI Revolution 101
Pawel Sysiak

From the anxious corner:

This is two stories in one.

The key to the second reading is to frame time such that 2016 is not the global date; time is flexible. My area in Virginia is years behind the fast-moving capitals: first Silicon Valley, then New York, DC, and LA. Some countries and cultures in the Middle East are decades behind us, and some on this planet are hundreds of years back. And we all exist together in a continuum of progress that looks like a RANGE of dates.

Then you can see the point Kurzweil is making “4th dimensionally.” I’m not sure if he can even see that, but it explains our disconnect. It’s not just “the 1%.” It’s the differential between the top and the bottom, which the smartphone and internet are “closing the gap” on, but those with the resources are advancing at a fast rate with the advantages they had, just as those at the bottom are advancing with the resources they have.

And this is where the current disconnect is. So imagine this trend continues, with no shared values towards AI. What happens?

And what happens if, just as time across the planet is a gradient, TIME ITSELF is a bit of a gradient? And, as several theories propose, FUTURE EVENTS MIGHT INFLUENCE PRESENT DECISIONS. https://www.sciencedaily.com/rel.../2015/02/150209083011.htm

So effectively, there’s kind of a probability conduit that runs through time.

So, theoretically, if in the not-too-distant future we create an array of probabilities of differing AIs, then as we approach the “Event Horizon” of those decision points, are the “warring” probabilities of different AIs effectively battling for ideological control of the present in rudimentary ways?

What if that’s always been the mechanism of evolution? And we’re at this weird organic life vs digital life nexus? Or approaching it rapidly with no ability to discuss it on a macro level?

What if that AI paper is basically also trying to warn us about SAFDing people in the present time, those who live “in the past”?

Or, what if that’s just what the future AIs want us to think so that we don’t discuss such things?

Or what if this is just a plain old discussion we should be having, because we’re headed in that direction anyway?

It’s not impossible that ‘The Beast’ is a warning to avoid a particular path down the probability branching of evolution, backfed through time from future probabilities through metaphor and story, just like what we write today in our films and TV shows, and comics and novels.

What if the collective unconscious is all the future probabilities flowing back to us in a mishmash that we pluck out of the air and turn into art, which then aids our decision-making process?

And what if this has always been both prophecy and also manipulation?

This article, this discussion, IS the war for the future. And we are woefully unprepared “morally” for these advances. The existing kinds of AI and AGI, along with smartphones, are already dramatically shaping the outcome, guided at the very least by the morals of the tech elite, many of whom are anti-social compared to the existing human paradigm. On the far side of the spectrum, who knows what is shaping us.

The culture wars, the gender wars, the holy wars, the deadly wars — all arguments over the direction of evolution.

How about World War One? “We need to talk” INDEED.
