The Future of Cities Within the 1 and 0

Ali Al-Sammarraie
MIT Tech and the City
4 min read · Apr 30, 2018
Illustration Edited by Ali Al-Sammarraie

“Ideology does not grow at the same pace as Technology” was the concluding statement of Liam Young of Tomorrow’s Thoughts Today in his cinematic presentation on Friday. He screened a film while narrating a dystopian future trajectory for AI. This was followed by Wendell Wallach’s cautionary presentation on Artificial Intelligence and its misuse, explaining why complex systems fail, what the potential dual-use and intentional misuses of AI are, and how increasingly autonomous systems threaten to undermine the foundational principle that a human agent is responsible for the harms of technology.

The widely known trolley problem frames morality as a gradient of values rather than a binary choice. The machines in that scenario make mistakes similar to humans’, but what is moral (and what is not) is, in my mind, a very dynamic set of values, one that Liam and our seminar seemed to disregard. There may have been discussions of morality elsewhere that I am not aware of, so I will briefly state my concern:

In his commentary on “Man, Know Thyself”, addressing self-awareness, Socrates explained that we are the culmination of our experiences: ever evolving, never the same, and tied to all our other experiences rather than to a mere birthplace. I understand morality through that lens. It is never ‘finished’; our values continuously change, readjusting what is ‘right’ and ‘wrong’, and so our understanding of morality can never be held static.
Our seminar moderator described two ways of encoding morality into machines: a close-ended machine moral code, and an open-ended one that is allowed to grow. The close-ended moral code captures commonalities that people tend to agree on across cultures, religions, and races, with Isaac Asimov’s Three Laws of Robotics as an example (1). Through the lens of Socrates, I do not see this as a working model: keeping a machine stuck in a static moral code while humans continuously morph their own creates a gap in how morality, and what is right or wrong, is perceived. If the objective is to continuously update the machine’s moral code, why not let the machine automate that process itself?
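To make the distinction concrete, here is a minimal sketch of the two approaches. This is my own hypothetical illustration, not anything presented in the seminar; all class names, rules, and fields are invented for the example, with the seed rules loosely modeled on Asimov’s laws (1):

```python
# Hypothetical sketch: a close-ended moral code is a frozen rule set,
# while an open-ended one may revise its own rules at runtime.
from typing import Callable, Dict, List

Action = Dict[str, bool]         # e.g. {"harms_human": False}
Rule = Callable[[Action], bool]  # a rule approves or rejects an action

# Seed rules, loosely modeled on Asimov's Three Laws (simplified).
SEED_RULES: List[Rule] = [
    lambda a: not a.get("harms_human", False),   # First Law
    lambda a: a.get("ordered_by_human", True),   # Second Law
]

class CloseEndedMoralCode:
    """Rules are fixed at design time and can never change."""
    def __init__(self) -> None:
        self._rules = tuple(SEED_RULES)  # immutable: no revision possible

    def permits(self, action: Action) -> bool:
        return all(rule(action) for rule in self._rules)

class OpenEndedMoralCode(CloseEndedMoralCode):
    """The system itself may add rules as it 'learns'."""
    def __init__(self) -> None:
        self._rules: List[Rule] = list(SEED_RULES)

    def revise(self, new_rule: Rule) -> None:
        # The double-edged sword discussed below: nothing constrains what
        # gets appended here, so the code can drift from its seed standards.
        self._rules.append(new_rule)
```

The close-ended version mirrors the static model I doubt above; the open-ended version automates its own updates, which is exactly where the next concern begins.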

Illustration Edited by Ali Al-Sammarraie

This leads me to the second part. Open-ended moral coding is a double-edged sword, especially when we are talking about Artificial Intelligence and Machine Learning letting AI grow on its own (Elon Musk has spoken of his fears of AI in this light). We discussed how AI grows at a faster rate than our biology evolves, allowing AIs to advance their intelligence with the potential of overriding their moral standards. Here, to me, lies the real danger to mankind. Humans and machines are never the same from an intellectual standpoint: one is designed to be linear, and as such understands the world only in terms of 1 and 0 (regardless of how complex it gets), while the other is a complex biological organism that thinks precisely in terms of what lies between the machine’s 1 and 0. Machines lack the contextual evolution of biology and, consequently, the contextual perception of Good that Grey, Oughtred and Vandervell speak of.
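As a rough way to picture that gap (again, my own illustrative analogy, not something from the seminar, and the function names are invented): a machine-style verdict collapses every judgment to a 0 or a 1, while a human-style one can sit anywhere on the gradient in between.

```python
# Hypothetical illustration: binary vs. graded judgment of the same input.
def crisp_judgment(harm_score: float, threshold: float = 0.5) -> int:
    """Machine-style verdict: everything collapses to 0 or 1."""
    return 1 if harm_score >= threshold else 0

def graded_judgment(harm_score: float) -> float:
    """Human-style verdict: a value anywhere between 0 and 1."""
    return min(1.0, max(0.0, harm_score))

print(crisp_judgment(0.49), crisp_judgment(0.51))    # 0 1  (no middle ground)
print(graded_judgment(0.49), graded_judgment(0.51))  # 0.49 0.51  (a gradient)
```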

Going back to Liam’s presentation from a utilitarian standpoint, where machines do our chores, provide our pleasure virtually and on demand, and make our desires so accessible yet unreal, I was left thinking that we are indeed trying to use AI for our daily repetitive tasks while neglecting that there is a distinct line where our ‘Humanity’ comes into play. If you were wondering, I am not talking about the Singularity; I am talking about the human nature of hard work and reward, the human connection needed to feel real pleasure, and the sense of responsibility that machines are designed to strip away.

I, as a human, am afraid of what I do not understand: black holes, our star turning into a giant that swallows everything in its path, or even the concept of God and “Heaven and Hell”. When thinking about AI and cities especially, I realized that it is perhaps emotions and conscience that are the signature traits, the “most sacred of all property”.

Illustration Edited by Ali Al-Sammarraie

(1) Isaac Asimov’s “Three Laws of Robotics”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
