Machine Learning Has Entered the Chat

“If I cannot inspire love, I will cause fear.”

Jeff Berg
Mar 4, 2020
Involuntary Dilation of the Iris, by the author, available as a print

Science fiction has put millennia of philosophical intellection to the test as grand thought experiments. These works imagine human reactions to advancing technology, often in dystopian settings, as we gaze into the uncanny valley of the artificial.

In the film Blade Runner, Deckard is the hero assassin assigned to hunt down escaped artificial lifeforms. He soon meets Rachael, personal assistant to Tyrell, who runs the Tyrell Corporation, maker of those artificial lifeforms. Rachael sizes up Deckard and says, “It seems you feel our work is not a benefit to the public,” instantly placing humanity’s ambivalent relationship with the things we create at the forefront of the narrative.

HAL, the computer in 2001: A Space Odyssey, says plainly, “I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal,” and, “I want to help you,” illustrating the tense trust relationship between humans and the artificial.

This tension of trust, and the speed with which it is lost, is illustrated even as far back as Shelley’s Frankenstein, whose artificial being speaks in a more self-aware tone: “If I cannot inspire love, I will cause fear.”

Asimov famously coined the rules for robots that would assuage our fears and keep ethical control in the hands of humans. In I, Robot, the rules appear as a complete list:

· First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

· Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

· Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In today’s technology ecosystem of social media, robotic activity has taken a less physical form but has thrived. To understand the nature of this development, robotics requires a broader definition, one that includes ethics-bound artificial social participants with no physical form.

These social machines, virtual as they are, are even described by consumers as “bots,” revealing the robot nature of their existence. These bots interact with us in the built environment and in the virtual world of social media. We are nudged and directed by the activity of these actors, knowingly or not. Little in the built environment and our social lives is untouched by data analytics and machine learning, from medical care and self-driving cars to dating apps and the advertisements placed in front of us. It doesn’t take long in a conversation about these experiences for someone to exclaim “that was creepy, though” about something as simple as an advertisement. Yet we embrace the ease with which algorithms present other results, such as a list of the people most likely to return a connecting swipe on a dating app. It turns out we are, as in the science fiction examples, in constant tension with the machine learning and artificial intelligence around us.

Asimov’s three rules are filled with challenges and scenarios. It is the first rule we have already encountered in depth, as we grapple with a sort of neo-enlightenment (self-described as “woke”-ness) of social-network-enabled activism and the resulting legally bound awareness of intersectional vulnerabilities. The definition of harm itself has broadened as we better understand the ways people can be harmed that may not be visible yet are just as damaging.

Within this nuance and complexity of evolving ethics, a paradox develops as ethics is introduced into machine-learning-driven actors, with more and more algorithms and data used and proposed in the digital transformation of civic responsibility and governance. The presence of these cybernetic actors, made of algorithms and spreadsheets, increasingly insulates government and limits its liability, allowing administrators to declare proof of their tech savviness and data-driven policy making even as the algorithms and data threaten the welfare of our most at-risk citizens. The paradox: in the attempt, even with good intentions, to shift ethical control to robots, we cannot avoid breaking Asimov’s first rule.

It is prudent, then, to use machine-learning-driven services as guidance, as probability engines, rather than accept their outputs without question. This is especially true in governance, the “smart city,” the design and architecture of the built environment, and the many components of the complex machine of society.
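As a deliberately simplified sketch of that stance, consider gating a model’s output on its confidence and routing everything uncertain to a person. This is a minimal illustration in Python; the decide() helper and its 0.95 threshold are hypothetical stand-ins, not any particular system’s API or policy:

# A minimal sketch of using a model as a probability engine rather than an
# oracle. The decide() helper and its threshold are hypothetical illustrations.

def decide(probability: float, threshold: float = 0.95) -> str:
    """Act automatically only at high confidence; defer the rest to a human."""
    if probability >= threshold:
        return "approve automatically"
    if probability <= 1.0 - threshold:
        return "deny automatically"
    return "queue for human review"  # the uncertain middle goes to a person

for p in [0.99, 0.62, 0.02]:
    print(f"model probability {p:.2f} -> {decide(p)}")

The numbers matter less than the shape: the algorithm advises, and a person remains answerable for the result.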

I Ching sample page, Song Dynasty (Wikipedia)

It’s become common knowledge that computers run on binary code. Less known is that binary arithmetic was invented by Gottfried Leibniz around 1679 and published in 1703; Leibniz later noted its correspondence with the hexagrams of the I Ching, a text whose core dates to roughly the ninth century BC. His work was utilized in the on/off capability of vacuum tubes, then the transistors that replaced them, and then the silicon that miniaturized them. Most people know this simply as binary: those strings of ones and zeroes that, to most, inexplicably exist at the core of all computing, hidden and safely fossilized away in the substratum underlying everything, as available as any other constant in the universe.
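For the curious, here is a tiny illustration of that substratum in Python, nothing assumed beyond the standard library: every decimal number a program handles is, underneath, one of Leibniz’s strings of ones and zeroes.

# Familiar decimal numbers, re-notated in Leibniz's binary arithmetic.
for n in [2, 10, 42, 255]:
    print(f"{n:>3} -> {n:b}")  # e.g. 42 -> 101010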

Watson on Jeopardy! (Wikipedia)

The speed and availability of cloud machines, limitless, elastic, serverless computing with machine learning algorithms available as a service, has added a new notation to the ones and zeroes we’re so used to. Popular culture witnessed this with the appearance of IBM’s Watson on Jeopardy!. The machine learning on display hinted to the audience at probability: statistically chosen replies that were maybe the answers, but not guaranteed. From that point on, it was important to think of computing not just as one and zero, but as one, zero, and maybe. The maybe exists as a sort of floating point between the zero and the one, like a superposition in which the machine is both committed and not committed at the same time to a computed result.
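One way to picture that “maybe”: many classifiers end in a squashing function, such as the logistic sigmoid, which maps any raw score onto a value strictly between zero and one. A minimal Python sketch, with invented scores for illustration:

import math

def sigmoid(score: float) -> float:
    """Squash any raw model score into a 'maybe' strictly between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-score))

for score in [-4.0, 0.0, 1.5, 4.0]:
    print(f"raw score {score:+.1f} -> maybe = {sigmoid(score):.3f}")

The output never quite reaches zero or one; the machine commits only to a degree of belief, the floating point between the two described above.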

Heavy ethical burdens fall on humans, especially in the built environment and the civic functions that keep it running. Pointing at algorithms and data when citizens and users ask about the consequences they must face after those algorithms run is a displacement of those burdens, not a solution.

In an ethical and functioning society, someone must stand and answer to the citizens who are negatively affected. That responsibility falls to those who deployed the algorithms and data hoping the ethical dilemma was magically solved and fossilized away in a substratum of ones, zeros, and maybes.

This article is a follow-up to my article Digital Government and Data Theater. Go there to explore more and find a lengthy recommended exploration list!


Jeff Berg

Webby Award-winning designer, urbanist, and coder, participating in and observing the digital transformation of the built environment.