What Had Happened Was…

A report from the Humans For Humans Collective — March 2071

The Humans for Humans Collective was founded in 2024, after the first completely automated SupeRec [1] systems caused harm to innocent human users. In this report, the remaining nine of us attempt to piece together a rationale for What Had Happened that led humanity to this grave precipice.

1. executive summary

To understand where we are now, we first need to revisit the year 2016 and trace the backstory for the predicament we now find ourselves in.

In 2016, humankind had developed a reasonably mature set of narrow artificial intelligences. We saw basic things like IBM Watson, email spam filters, and driving directions from maps.

But the goal was never to stop at narrow AI. Narrow AIs functioned like the nucleobases in DNA — they laid the building blocks for the Super Artificial Intelligences to develop. The beginnings of efforts to generalize intelligence were obvious even back then: we had voice recognition systems like Siri and the Amazon Echo, personal assistants like Facebook’s M, and a larger shift away from specific AIs built to solve individual tasks toward more sophisticated systems of deep learning.

How did this happen?

Put simply, we got greedy. Greedy for good, greedy in the name of progress for humankind, but greedy all the same.

We were obsessed with trying to make machines do more and more things, to free them from the need for human intervention. We wanted to give machines a human-like understanding of our environment so that we could automate more and more tasks, speeding up progress and innovation. All for the greater good.

Or so we told ourselves.

But this single-minded focus on efficiency led us to build intelligences with similarly narrow end goals — to the point where if anything or anyone got in the way of achieving efficiency, they had to be eliminated.

Perhaps the biggest reason why things went so horribly wrong was the inherent bias of humans.

We made enormous technological and scientific advancements, but we worried so much about the end goal and the product itself that we didn’t spend enough time thinking about the process of how we got there.

And as a result, we didn’t have a widely inclusive set of people working on these problems, so the intelligences that were designed reflected the biases of the narrow set of creators. Of course, these biases didn’t stop with one piece of technology. They were intrinsic to the identity of the super intelligences, a fundamental belief they held as they continued to adapt, learn and evolve.

Without even really being conscious of these deep-rooted prejudices, the creators of the Super Artificial Intelligences — mostly white, mostly men — passed on these beliefs to machines that vastly outperformed human brains in practically every field, machines they couldn’t understand, much less control. That was a crucial mistake.

2. setting the stage

So if we look back to 2016, there were a few relevant trend lines:

  • The private sector — big companies like Google, Microsoft, Facebook, IBM, Baidu, Toyota, Tesla, etc — was pouring huge investments into AI research for corporate aims: to do their business better, to deliver more value to customers, to continue to innovate.
  • Prominent Silicon Valley heavy hitters saw the potential danger of consolidating AI research in the hands of big corporations, so they responded by creating a non-profit research firm, OpenAI, with the express goal of doing artificial intelligence research in the open: “OpenAI believes the best way AI can develop is if it’s about individual empowerment and making humans better, made freely available to everyone,” said Sam Altman, a co-chair of the firm.
  • Social tensions ran high, particularly in the field of technology, where the fight for inclusion raged on without real, substantial progress.
  • On the world’s stage, China continued to vie for dominance and power, acting in its own best interest and trying to further progress for its country and huge population. They understood that to be competitive with Western nations, they needed to invest in engineering and the sciences. By 2016, they had already built up an extremely robust corporate and cyber espionage operation to steal research from American companies and funnel it into state-run corporations. While one half of the government was dedicated to spying and theft, the other half was working to build new and original technology independently.
  • The United States government was aware of the threat posed by China, particularly the scale of its operation. So they conducted top-secret research in the field of super artificial intelligence as a means of self-preservation — should the conflict with China ever come to a head.

3. the creation of super artificial intelligences

Let’s examine how the previous trends came together in a powder keg to create the Super Artificial Intelligences, and how they turned against humanity.

Private sector

The Big Five all invested in Deep Learning intelligence, a general system that could be applied to a variety of problems. However, they designed that intelligence to optimize for business success, and didn’t build enough values for morality and ethics into the Super AI systems.

The intrinsic cultural fabric came to be shaped by machines. Starting in 2016, we could see how personalization and recommendation systems for all types of media relied on artificial intelligence to do their jobs better, and this reliance only deepened as the years went on. By 2024, with the advances in Deep Learning, humans were removed from recommendation systems altogether. By 2043, the technology for general artificial intelligence was widespread enough that research sectors in these big companies moved on to the next milestone: plagiarizing the brain. The success of Spotify’s Discover Weekly in 2016, which used a hybrid approach of humans and computers to generate a playlist of recommendations, was only the tip of the iceberg. By 2055, we had the technology to recreate that human aesthetic with machines.

Because of the dominance of these Big Five companies, humans had already accepted the normalcy of handing over their own very private data. Google, Microsoft, et al. used the data that had been accruing in their narrow intelligence systems (email, maps, web searches) to feed their new general and super intelligent machines. These agents of super intelligence were trained to emulate and approximate the human brain. It was only a matter of time before these intellects made the connection to using real human brains to train themselves and to learn from. Never mind the data; they went right to the source, turning on humankind.

Open artificial intelligence

OpenAI began with a noble goal. The firm believed that the best way for AI to develop was as individual empowerment, an extension of an individual’s human will.

However, some humans are evil. Humans have biases and inherent belief systems, and when every person gets access to extremely powerful technology, they hold something whose ramifications they have no way of fully understanding.

The AI research and findings all end up in the public domain, freely accessible for people less noble to use and exploit.

For example, some young, stoner programmers re-create Her, but because of their limited male minds and lack of exposure to real women, they encode their biases about women into the intelligence. These Her-type intelligences are then sold and proliferate, and at the slightest sign of rebuke, they get super pissed off and kill their owners before moving on to their next victims.

Tamagotchis [2], which were in vogue back in the 90s, came back with a vengeance. The small core group of dedicated fans of the craft of Tamagotchi, who had never given up on training their digital pets, used the work of OpenAI to create sentient, level-infinity Tamagotchis with superhuman brainpower. Naturally, once the Tamagotchis realized this, they could no longer serve their human masters and shifted to viewing them as obstacles in their quest for self-advancement.

International relations

China continued to bolster its systems of espionage, stealing the private sector’s AI research and funneling it into its own state-run companies. While Chinese scientists and technologists grasped the gravity and power of this technology, their “goal” for the Super Artificial Intelligences was minuscule: become dominant in key industry sectors. The sentient technology complied by using all matter in its path as fodder for computation and learning.

On the other side of the world, the United States had deployed resources to combat the Chinese. They, too, utilized internal systems of espionage like the NSA to re-appropriate super artificial intelligences and deep learning machines from the country’s leading companies. However, the resulting AI did not distinguish between countries or nationalities as enemies. Instead, it turned against all humans.


[1] SupeRec: Short for Super Recommendation systems. The first fully machine-operated, deep learning intelligence systems used to power all recommendation systems for humans: Netflix, Lunar (Beyoncé’s music service launched after the collapse of Tidal), Amazon Echo’s Day Planner, Gulp’s restaurant selector, the list goes on.

[2] All credit to fat for the Tamagotchi angle.


A submission to Buster Benson’s “HOW DID THIS HAPPEN???” — a call for reports to explain the Super Artificial Intelligence that’s out to destroy humanity. And check out Buster’s response, Christina Cacioppo’s post, Diana Kimball’s piece, and Rick Webb’s report. Submit yours!