(This article is re-posted and combined from its original publication on the Fractal Sciences blog.)
Like so many others in the tech sector, I grew up steeped in the nerdy glow of such geeky wonders as “Twin Peaks,” “Star Wars,” “The X-Files,” and “Star Trek: The Next Generation.” While some part of me wants to write about how young David Duchovny was in the late ’80s and early ’90s (and how surprisingly good he looks in a dress), this starts with a short aside about the yellow-eyed android aboard Sir Patrick Stewart’s fantastic flying space machine, the starship Enterprise.
Played by the incredibly talented Brent Spiner, whose other acting pursuits include being the strangled mouthpiece to an invading alien in “Independence Day,” Lieutenant Commander Data is an emotionless artificial life-form, with a face and body similar to his human creator’s. Over the course of seven years on television and four subsequent movies, Data slowly comes to understand things like humor, interpersonal relationships, caring for a cat, and the process of creating art. In other words, through the assimilation of more and more data from the people around him and the addition of a custom-engineered emotion chip, the machine became nearly human. That’s the story. The real question is whether a machine full of facts can make the jump to being “human.”
For clarity’s sake, “human” is not being defined as “sentient” in this context. Self-awareness was something Lt. Commander Data possessed before he began his journey toward humanity. There is also no doubt that machines will be able to fool humans into thinking the machine is human, if they haven’t already. The “Turing Test” will be passed, but that has nothing to do with whether a machine can come to experience what it means to be human. The question has as much to do with what it means to be human as with whether being human is something unique to our species. It also gets to the question of whether the experience of being human is reducible to data points.
In an article posted to The Drum, author Gillian West examines commentary from FutureBrand’s global head of strategy, Tom Adams, on how big data will help us come to know ourselves in ways we never have before. Citing developments in wearable technology that let people study their own behaviors and make positive adjustments (“lifehacking”), Adams suggests this technology can give humans a far deeper understanding of, and far greater control over, their lived experiences. The will to use data to enhance quality of life is the human component of these developments. Machines can learn and self-optimize, but what would it mean if a machine could choose not to optimize its performance?
Lt. Commander Data’s journey toward becoming human is a journey away from being a perfectly functioning computer. He begins with a programmed desire to become human and finds a path through finding a way to express his individual will as a person. This path involves choices in his personal style, the relationships he develops, and his own personal sacrifices. He experiments with cultural experiences as a way to learn about others, and in the course of doing so he determines a course of interaction — an ethics — uniquely his own.
Can a machine define an ethics for itself, or does it need to be programmed in order to develop a code of action? The question is asked with some irony, because it is essentially the human question of whether nature or nurture has more sway over how we interact.
Come back tomorrow, same Fractal time, same Fractal web-site, for the thrilling conclusion of whether a robot can become a human.
Continued from yesterday’s post, we find our conversation stopped at the common human question of nature versus nurture with regard to whether a construct made by people can ever experience what it is to be human.
So far I’ve avoided the simple question of whether a machine can ever be human, regardless of the amount of data compiled, processed, and used to generate subsequent algorithms and behavioral scripts. Of course, there are those who will claim a machine person will never be the same as a meat person. This view is limited, in that genetics program how a human body functions and how it corrects for inefficiencies, deficiencies, and ill-fitting parts. This argument for the uniqueness of humans incarnate ignores existing developments in genetic programming (this is actual computer code and not a euphemism for genetic engineering) that could be applied to non-silicon computing environments…like meat.
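To ground the parenthetical above: genetic programming proper evolves actual program structures, but the core evolve-select-mutate loop can be shown with its simpler cousin, a genetic algorithm. This is a minimal toy sketch in Python — the population size, mutation rate, and fitness function are all illustrative assumptions of mine, not anything from the source.

```python
# Toy genetic algorithm: evolve random bitstrings toward all-ones.
# All parameters here are illustrative assumptions, not a reference
# implementation of genetic programming.
import random

random.seed(42)

TARGET_LEN = 20      # genome length
POP_SIZE = 30        # individuals per generation
MUTATION_RATE = 0.05 # per-bit flip probability

def fitness(genome):
    """Count of 1-bits: the trait selection favors."""
    return sum(genome)

def mutate(genome):
    """Flip each bit independently with a small probability."""
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    """Single-point crossover between two parent genomes."""
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

# Random starting population.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break  # a perfect genome has evolved
    parents = population[:POP_SIZE // 2]  # keep the fittest half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(generation, fitness(best))
```

Nothing in this loop cares that the genome is a list of bits in silicon; the same selection pressure could, in principle, act on any substrate that can be copied, varied, and scored — which is the point the paragraph above is making about meat.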
By this same token, there are those who will object to the concept of a machine ever experiencing what it is to be human because machines lack a nebulous “spark of humanity” or a soul. Experientially, this is difficult to evaluate, as the experience of having a soul or not having a soul is utterly inexplicable. We have all always already had souls or they were never here at all, and there’s no way for any of us to meaningfully experience the difference, let alone verify the existence or non-existence of such an ephemeral thing as a soul.
If we could replicate a body, program the basics of human development and behavior into it, and teach it as we would a human being, allowing it to accumulate, process, refine, and redevelop an incomprehensible degree of experiential, factual, and perhaps philosophical or theological information, then would we have a machine capable of experiencing what it is to be human? If Lt. Commander Data’s body were entirely made of bone, muscle, skin, and various other biological substances, what would separate him from the man who created him?
Computers, machines, and androids are designed entities. They are engineered, which is (unfortunately) something humans are capable of doing to their own offspring through genetic manipulation. As designed entities, machines start from their basic components and operating system, and would develop independently of each other according to experiences, meaning any experiential deviation would generate the differences eventually expressed in their behaviors and personalities.
What I’m getting at is a lack of a bright line between the accumulation of experience that develops a human being and the accumulation of data fueling the personal growth of a human-like machine. While the origins of a person and a machine may be different, a self-aware machine’s experience of humans and other machines treating it as other than human would be a greater impediment to experiencing “being human” than an inability to assimilate sensory data and feelings. Paradoxically, this might lead a human-like machine to the common human experience of feeling alienated, persecuted, exoticized, or simply unable to be “normal.”
As it stands, androids don’t exist. Large quantities of data and smart algorithms do exist. Someday, those algorithms might have enough knowledge of our species to make intelligent decisions about how to regard us. Eventually, these programs might have enough knowledge of how to regard us that they will begin to develop a sense of how they might want to better engage us. They might determine their own goals, and eventually their own simple wants and needs. Given all this, it makes more sense to give humanity the benefit of the doubt for its creativity than for its uniqueness. Our creativity has been demonstrated consistently; our uniqueness rests on a simple lack of data.
This article originally appeared as two posts on the @fractalsciences blog; part one is here and part two is here. Thanks for reading.
If you liked this, please hit “recommend” to let others know it was worth reading. If you really liked it, let all your geeky friends know by sharing it to your social network of choice.
