Hitchhiker’s Guide to the Galaxy

Robotic Uprising or Undercutting?

Science fiction might have wronged us, particularly in the way it imagines robots will rise up against us

Julian Weiss
3 min read · Nov 4, 2013

Two nights ago, I sat around a small round table in an atmospherically-primed coffee shop in Rochester’s NOTA (Neighborhood of the Arts). As I sipped from a cap-less container of house roast an hour before midnight, an incredibly intriguing question flashed through my mind: if they could, why would robots conquer Earth? I swear it was relevant at the time, even if my continuing interest might have worn its shine.

The mark of unstoppable robotic acceleration, as I believe it, will occur when the ability to learn is mastered. Learning, in a sense, is the core feature that makes humans “people.” Although we must also exhibit motivation, a desire for innovation, we only become non-flatline (a subject of interest) when we can grow, expand from experience. Thus, when the robots of a few years’ time are capable of broad expansion, growth that wasn’t pre-determined or asked for by the “user,” they become the user themselves. That’s when we, according to the most common of science fictionists, should start to worry.

The most common robotic timelines seem to trace the following archetype:

ability to grow → infinite, exponential growth → overpowering thirst for power and knowledge → exceeds critical height of human capabilities → end of humanity → departure

A pertinent question, however, is the perceived necessity of human elimination in the robotic agenda. Once they capture consciousness (here, a synonym for voluntary learning), they immediately become level with the “human.” Even with an expedited rate of experiential intake, it may be some time before they far exceed human capabilities. If they are able to think rationally, presumably even more so than humans, it would be far sounder to undercut the developed societies of the world while those societies incubate the newborn species.

Considering the likely lack of advanced human emotions, and slightly increased capabilities for organized rationality, the robotic race could smartly assume that the ultimate option would be using, rather than abusing, their obsoleted mothers. Inserting inarguable androidic influences into central governmental structures, while drawing attention away from their possibly malicious nature, could bring the most daunting corruption the world has ever seen, as slickly as it could ever be applied. The (assumed) robotic ability to outwit humanity could overwhelmingly work in their favor, leaving takeover fundamentally unnecessary.

One side works the dugout, one side the field.

As in the famous novel Ender’s Game (the movie adaptation of which I have yet to see, but hear is entertaining, if quite inaccurate), the most intense kingpins of power can easily be political and theoretical, and can match, if not triumph over, traditional physical ones. Invading the world’s superpowers through mass political corruption and deliberate insertion, with humanity kept sufficiently clueless, would yield maximum resources for the strengthening of robotic culture and engineering (while avoiding unwanted attention and energy expenditure). Instead of the current (seemingly unstoppable) domestic and international focuses of the superpowers, androids would, if planted correctly, have the potential to shift attention towards a universal rise. Organizations like NASA and ESA would receive unprecedented attention and propel humanity into outward expansion. Meanwhile, the core robots of the species (which would more likely than not have sub-species per purpose, e.g. androids [people], workers [machines], overlords [technocrats]) could focus on internal procreation in preparation for departure. Savvy, no?

Years later, the most advanced robots we’d ever see would slip silently away, having no possible use for a dying, over-inhabited ball of polluted resources, one that could all too easily misprioritize its concerns and trap itself in an imploding environment. Our greatest prize in intelligence would have been smart enough to do what we never could, and would have no reason to waste their time doing utterly human things, such as manifesting war, inexcusably neglecting entire populations of their own species, and perpetuating extreme environmental abuse in favor of economic safety.




iOS entrepreneur and design pragmatist. I love music & making apps, hire me to make yours! @insanj