AI Dystopia in balance, and context

Before we assume the worst, let’s define the hype

The future will carry plenty from the past: Blade Runner fear-as-rain, maintained neon, the stubborn beauty and bleakness of old ways, immortal things, and Luddite philosophy.

Michael K. Spencer requested an analysis of his dystopian predictions of an AI-controlled globe and future.

Let’s start with the assumptions and assertions:

…there’s increasing evidence [that] AI:
  1. Accelerates wealth inequality

Yes, just as software has in the Internet and mobile eras. If you’re not struggling to survive, you can do more with less. You don't need to own a factory, just gather some coders and laptops around a kitchen table. Then you capture that advantage first, use old-new money to hyper-charge new monopoly plays and claim “disruption” — even if it means delusions of “revolutionizing urban transportation” by putting neo-hipsters on motorized scooters.

Photo by Viktor Forgacs

Interconnection technology also opens up access. The mere fact that we can consider a “global AI” phenomenon speaks to the power of the Internet to harness our collective human intelligence, the greatest intelligence we have ever known. Nick Bostrom, the king of well-researched, technical AI dystopia, agrees that it rivals any form of artificial intelligence theorized to be created.

It’s a proven tool, but one we don’t know how to use or control properly: how to complement each other’s individual intelligences, to add them up as a cooperative, hive-brain power instead of different sections battling each other like an autoimmune disease.

Any tool has potential to be used for good or for evil, which leads us to the claim that AI…

2. Has led to the weaponization of social platforms

Undeniable. QuHarrison Terry shared his philosophy on this inevitable stage of the technology cycle, and history makes it hard to deny. When the military, an authoritarian regime, or any “bad actor” sets an intention of conflict and aggression, some accepted technology usually ends up being turned into a weapon.

But let’s look at the possibilities for regaining peaceful use of social platforms. A platform is not the same as something designed only as a weapon, only to harm, kill, and destroy. We can use a nail gun to build a house, to shoot cans, or to shoot nails at and into people. These platforms, too, are tools, capable of whatever we intend for them, and the “we” here is whoever carries the unsecured toolbox.

Photo by Willian Justen de Vasconcellos

Think of nuclear weapons: how much money, energy, and fear, how many close calls, came from investing so heavily in those un-tools. Mutually Assured Destruction may have kept some greater “peace,” and there may have been great discoveries in the race for a bigger bang to scare and intimidate the other side, but it came at the cost of entire generations’ faith in collective, cooperative intelligence.

3. Leads to ethical problems in the control of groups of people

Racism and discrimination are inherent in labeling and categorizing people, and they expose the lack of dialogue and inclusive thought in setting the intentions behind AI algorithms.

What messaging, what advertisements, what experience: these questions are usually answered by the business model, which traces back to assumptions about “what people want” based on “what the market will bear.” The attention economy captures our cognition and encourages an addictive, compulsive dependence on platforms and quantified self-worth. This is the modern tribalism, and the tribes are ambiguous, disloyal, morphing, and shallow. It’s not AI; it’s the goal of an entire economic system. AI just accelerates and retains the capture. AIs become individualized retention and appeasement engines, following us around, guiding us back to serve the metrics.

4. Is a tool that will be abused by authoritarian centralization

Authoritarian Centralization …

Sounds like any of the centralized platforms we use and are controlled by. Government power is declining; the new centralized authoritarians are tech companies, and they have more influence over our day-to-day lives.

Now, when traditional centralized authorities use centralized authoritarian platforms to control, you’re talking meta-manipulation and collusion. Luckily (?), most of Silicon Valley is libertarian or ‘liberaltarian,’ or clings to some form of anti-government, idealized “leave us alone so we can make sci-fi into reality” philosophy.

China has a captive audience, and the Party holds enough power (or the illusion of power) to collaborate with, regulate, and pick which tech companies will do what. State capitalism plus AI is something we’ve never seen. The Soviets fell apart just when computation and the Internet were about to get good. The Chinese are building their own, improved version of the Internet and fully embracing AI and open IP at whatever cost.

5. Will drive a decline of trust in governments and tech companies

Photo by Randy Colas

The decline of trust in government is a spillover effect of attention-economy manipulation. Distraction and manufactured reality are familiar in politics, and now the polarized stagnation has a new strategy tank to pull from. Politicians can overcome this by being real, exposing their authentic reactions, having nothing to hide, and connecting directly to citizens with the same tech that could cause distrust.

Tech companies need to get out of the way of human connection. Their platforms, in the continuing (if limping) spirit of the Internet, enable a connection, nothing more. Business models based around distraction from this philosophy corrupt the experience and foster resentment and distrust in favor of short-term increases in conversion and “engagement.”


Let’s talk about the reality of AI’s “revolution” — not just that there’s hope or much to be determined, but that we’re also looking at a potential AI winter. For decades, AI has gone through hype and fail cycles. It’s not really living up to what’s been promised — or feared.

Photo by Luca Iaconelli

We’re really just pumping more data through faster, denser arrays of servers. There’s more throughput and more investment, so we can claim more results. AI has nothing to do with the root problems, the dysfunctions and colder social wars of humanity; those have been building for a while.

AI is burning the fires brighter, bright enough to reveal the ugliness of the intentions and models behind them. AIs are faceless extensions of bubbled technologists. Congressional committees that don’t even understand how they work cannot fix this.

Photo by Miguel Bruna

Users can challenge the system and demand a different experience. It just takes a focused, stoic effort in a maelstrom of anger and flashing distraction. Easy, right? Maybe we need our own AIs to manage our exposure and our notifications, to fake cognitive capture while we think and organize and rebel freely.