Time for Immortals

Philosophia
Sep 7, 2018 · 4 min read

I recently read the book ‘Superintelligence’ by Nick Bostrom, a really difficult book to read, as far as I’m concerned, but definitely a FASCINATING one. I see it as an amazing, research-driven thought experiment that helps us understand who we, as human beings, are and where we’re headed. It certainly touches, in a philosophical way, on the question of whether ‘we’ will become ‘Super-human’, or in Friedrich Nietzsche’s words the ‘Ubermensch’, or whether ‘we’ will transcend into something completely different. Different in every tangible and intangible aspect.

Especially chapters 12 and 13, which explore how an AI can be loaded with values and which values to load, respectively, give us a nice guide as to how we can apply the same principles while we acquire ‘values’ in the world today, without biases and prejudices.

Although it is not yet clear how we can solve the value-loading problem in AI, we do have multiple approaches and ideas with which to start this exploration. It’s fascinating to see how Nick Bostrom takes the ideas of how human beings acquire their values during their lifetimes and asks how one might instill values in an AI in the same way. It’s also fascinating to see the comparison drawn between ‘institution design’ in AI terms and how human societies have been structured to keep themselves from being corrupted. We know that several socio-political philosophers and thinkers have spent their lives on just this subject: how we, as humans, can build our systems and societies so that we can all live with well-being and minimal misery. And we know this is an ongoing ‘evolutionary’ discussion that helps us structure societies around the environment we live in.

I do see a clear parallel with ‘institution design’ for AI too, especially in light of how an AI’s growing capability will change the landscape of the AI design system (similar to how the human environment changes).

However, I do feel that before we make any such comparisons between an AI system and human-led societies, it is important to dig deeper into what drives our moral values in the first place. After much introspection, I strongly believe that MOST human motivations and moral values stem from the fact that we are MORTALS. This is not obvious to most people, so let me break it down to get to the point.

Most of our actions are driven by the fact that we either value something or we don’t. From the moment we are born, we spend time learning new things, including walking, talking and so on. As children we develop value for our mother, our father and our immediate family, as they give us value. These values shape who we are and how we conduct ourselves for the rest of our lives. So acquiring certain values and acting on them becomes part of an ongoing life process.

As children or teenagers, we spend most of our time reading, going to school, or learning a new thing or subject (academically or non-academically). This, again, is mostly something we value. In early adulthood we tend to focus a bit more on what we think will help us make a living, i.e. earn our bread and shelter. For most people, the value as an adult may not lie in the subject or the learning of the subject itself, but in the fact that ‘to earn’ is important. The point I’m trying to draw here is that during our entire lives, we DO what we VALUE, knowingly or unknowingly. We love someone because we value them; we hate someone perhaps because we value our ego; we spend time on Facebook because we value our narcissism, or the sheer amount of time we have already spent uploading a lifetime of photos and videos. In other words, we value where we spend our TIME, and we give TIME to what we value. In fact, if you think about it, we come to value our TIME because it is the most REAL thing we possess. And this thing, TIME, is limited for us mortals.

Think about it: we read and learn so that we can minimize the time (and, of course, the effort) needed to go through the process the writer went through over several years; we spend time with someone we love so that we can learn more about that person; we love our children, on the other hand, because they are the only entities we spend so much time on (and, of course, because they are biological copies of ourselves, something embedded in us over an evolutionary ‘TIME’-scale); and so on. TIME turns out to be the fundamental basis on which we humans build our value system.

Now think about a life in which you were magically endowed with immortality, or in other words, UNLIMITED TIME. I strongly feel that would entirely change the value system we have today. In his book ‘Homo Deus’, Yuval Noah Harari meditates on a-mortals (not immortals), explaining that we may enhance ourselves but may still die because of our biologically limited bodies. But now imagine if we end up becoming non-biological beings without an expiration date, transcending into superintelligent hominids who have no value system at all, or perhaps a whole new spectrum of values not based on the fundamental convergent values of today. Nick Bostrom does great work on the idea that a future AI, irrespective of whether it is de novo or a human emulation, would have certain convergent instrumental goals. If this future AI has unlimited time, I strongly feel that those convergent instrumental goals will be coupled with a value system unfathomable to us humans, primarily because such beings may be bound by neither SPACE nor TIME.