Beyond Human: The Capabilities of AI

Will AI ever be human?

There is no easy answer to this question. Many of the possible answers hinge on our definition of what it is to ‘be human’. Our human identity is bound up in a variety of biological and psychological traits, both inherent and learned. Over time, humanity’s claims to unique traits have dwindled under scientific scrutiny. Many traits once considered to be unique to humans, such as mathematical ability, the capacity to navigate using landmarks, tool use and many others[1][2], have been found in other animals. There remain a few traits which distinguish humans from other lifeforms[1][3]. These traits, both physical and mental, make a good starting benchmark for AI becoming ‘human’ in the eyes of biological humans.

There are many physical traits that help distinguish humans from other animals. Among the most notable are bipedalism, opposable thumbs, complex vocal organs, and our array of senses. All of these traits, which would clearly distinguish humans within the animal world, have already been roughly replicated in a variety of robotic forms. Boston Dynamics has created a bipedal, weight-distributing robot that walks unassisted with a fairly human gait. Denso Wave has created robotic arms and hands that grasp objects the same way human hands do, using opposable thumbs. Researchers have also created a “robot [consisting] of motor-controlled vocal organs such as vocal cords, a vocal tract and a nasal cavity.”[4]

Clearly, when differentiating artificial life from humans in physical terms, we must use a different set of criteria than we use to differentiate humans from other biological life. Perhaps the best measure of our differences is the quality of robotic traits in comparison to their human counterparts. Despite robotic approximations of many traits that uniquely identify humans among animals, these approximations fall behind in quantitative measures. Although we have developed computer vision,[7] artificial skin with haptic feedback,[5][6] and robotic facial expressions,[8] these approximations, among many others, are not as accurate as their human counterparts and often fall into the uncanny valley. For now, a number of these qualitative roadblocks to creating a robotic physical form indistinguishable from humans remain. However, advances continue in these fields, and given the present rate of development, it is not inconceivable that such a form will eventually be possible after many iterations of small improvements.

When evaluating the distinguishing mental traits of humans, the line between science and philosophy becomes blurred.

The distinguishing aspects of human cognitive ability are tricky to define. Whereas humans are clearly delimited by certain biological principles, mental traits vary far more from person to person. If a certain mental capability is not shared by all humans, it can hardly be an essential part of ‘being human’. With this in mind, we can only establish a loose basis of unique mental abilities held by a majority of non-disabled adult humans. This basis includes language/symbol use, learning (cross-domain optimization), abstract thought, and empathy/morals.

Although robotic systems can understand commands given in specialized artificial computer languages, human language has proven to be a challenge. In the field of Artificial Intelligence, language and symbol use falls under natural language processing. A number of research groups are dedicated to creating more efficient algorithms for interpreting and generating human language. Although there are programs that can correctly transcribe the vocal patterns of conversation into written words,[11] the ability to consistently interpret the meaning of a sentence and respond to it in a natural manner has yet to be fully achieved in a verbal context. (Some chatbots have been claimed to pass restricted versions of the Turing test, though those claims remain disputed.)
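To make concrete why transcription is easier than understanding, here is a deliberately naive ELIZA-style responder (all patterns and canned replies below are invented for illustration): it matches surface patterns of a sentence and echoes fragments back, with no model of meaning at all.

```python
import re

# Toy pattern/response rules, purely illustrative: each pattern captures a
# fragment of the input and splices it into a canned reply.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\b(mother|father|family)\b.*", "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a reply by surface pattern matching, without understanding."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # fallback when no pattern applies

print(respond("I feel misunderstood"))  # Why do you feel misunderstood?
```

The trick works on cooperative inputs and fails on everything else, which is precisely the gap between mimicking conversation and interpreting it.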

Comparisons of animal intelligence to human intelligence distinguish between ‘laser beam’ and ‘floodlight’ cognition: using a specific solution to solve a specific problem, versus applying the solution of one problem to another situation.[1] ‘Laser beam’ cognition is employed by many existing simple (weak) forms of Artificial Intelligence, including GPS navigation, gaming, phone assistants, and Google Maps. In this area, although computers outcompete humans within a limited scope, they cannot deviate from their framework for achieving a solution. More complex (strong) forms of AI would be capable of using, modifying, and randomly recombining existing structures to fit the parameters of new goals, in a manner closer to how the creative aspects of human minds work. This has not yet been fully achieved, although Deep Learning and Evolutionary Algorithms appear to be a step in the right direction, and some experiments have yielded surprisingly ‘creative’ results.[9]
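As a concrete sketch of the Evolutionary Algorithms mentioned above, the following toy genetic algorithm evolves a bit string toward a trivial goal (maximizing 1-bits, the classic "OneMax" exercise). Every parameter and the fitness goal are invented for illustration; the point is the select, recombine, mutate loop, not any production system.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

GENOME_LEN = 20      # illustrative parameters, chosen arbitrarily
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.05

def fitness(genome):
    # Toy goal: count the 1-bits ("OneMax").
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto the other.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Keep the fitter half as parents, refill with mutated offspring.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # typically close to the maximum of 20
```

Nothing in the loop "knows" what a good genome looks like; selection pressure alone discovers it, which is why such searches occasionally produce solutions their designers did not anticipate.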

Abstract thought allows us to consider concepts beyond our present state (beyond what our senses currently tell us): intangible subjects such as philosophy, mathematics, and potential future events. Contrary to traditional belief, artificial intelligence and robotics researchers have discovered that it is much easier for computers to handle abstract thought than to accurately sense their present state and understand the nature of their environment. Establishing the rules of the game, it seems, is much easier for AI than seeing where the pieces are. Part of this complex challenge is known as Simultaneous Localization and Mapping (SLAM), an area being developed and improved upon regularly for a variety of fields, including Augmented Reality, commercial drone applications, and general AI.[12] Essentially, SLAM is about developing an understanding (based on computer vision) of where a robot is relative to its environment and what other objects make up that environment. It is a simple task for humans and animals, but quite complex for robots.
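To convey the flavor of the problem in a drastically reduced form, here is a hypothetical 1-D localization sketch (all noise levels and positions are invented): position estimated from odometry alone drifts, while sighting a landmark whose position is already mapped pulls the estimate back toward reality. Real SLAM estimates the pose and the map jointly, in two or three dimensions, from camera or lidar data.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

LANDMARK = 10.0   # position of an already-mapped landmark (assumed known)
true_pos = 0.0    # where the robot actually is
estimate = 0.0    # where the robot believes it is

for step in range(20):
    command = 1.0                                # intended forward motion
    true_pos += command + random.gauss(0, 0.1)   # actual motion is noisy
    estimate += command                          # dead reckoning: drift accumulates

    # Sight the landmark: the sensed range implies where we really are.
    sensed_range = (LANDMARK - true_pos) + random.gauss(0, 0.05)
    observed_pos = LANDMARK - sensed_range
    estimate = 0.5 * estimate + 0.5 * observed_pos  # blend prediction and observation

print(abs(estimate - true_pos))  # small residual error, thanks to the correction
```

Dropping the landmark correction leaves only dead reckoning, and the error grows without bound; that difference is the whole motivation for SLAM.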

“…it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” (Hans Moravec)

Empathy and morality are essential characteristics that contribute to the success of human society. Our ability to work as a group and survive as a species may well hinge on these traits. Yet we lack a common consensus on where our morals should lie, and our empathetic connections are often illogical and in some cases detrimental.[14] It has been theorized that “morality … is a consequence of the interchangeability of perspectives and the opportunity the world provides for positive-sum games.”[17] In other words, “we are obligated to the two following requirements for the formation of a robustly moral being.

  1. Intelligence to envision the world from other perspectives
  2. A strict necessity to co-operate with other individuals”[17]

As AI comes into being, it will immediately satisfy the second condition, as it will depend on humans for a great number of things and thus need to co-operate. However, the ability to envision the world from other perspectives is one we must provide. Even with other perspectives envisioned, an AI may not reach the same conclusions we have. In the case of autonomous cars, for example, a lack of consensus on moral standards creates immense ethical dilemmas about who should die in the case of a wreck.[15] In response to this issue, there are groups gathering large data sets of human moral decisions[16] and working to create robots that can read the emotional states of the humans around them and respond appropriately.[13]

From these examples, it is apparent that existing AI and robotic forms already fulfill many of our benchmarks for ‘being human.’ Additionally, virtually every aspect of AI is currently being improved upon and regularly sees advances. Given this basis, it would be unreasonable to deny that it will eventually be possible for AI to become virtually indistinguishable from humans (even if only by virtue of the infinite monkey theorem). What our society will actually classify them as is another question.

Time and time again, humanity has defaulted to homocentrism, the belief that human beings are the most significant entities of the universe. This variety of thinking is in line with the kind of bigotry that influenced our early geocentric beliefs about our place in the solar system, our racism towards other humans, and our established speciesism.

If humanity chooses to recognize AI as a separate sentient lifeform with the same rights as humans, it will have immense ramifications. If we allow AI to enter the “Social Contract” which confers rights to people, we must decide what qualification will enable them to enter this contract and how that qualification would apply to other lifeforms. If learning, emotion, or some other qualification becomes a new basis for philosophical rights, as opposed to biological bounds, how will our society logically differentiate between the rights of a cognitively impaired human, an animal, and an AI designed with less intellectual capability than a human? Perhaps this time, with our past experiences and our ability to empathize directly with robots,[18] we will be able to overcome our bigoted and self-centered worldview in order to develop a new paradigm for inherent rights.



[1] Hauser, Marc (2009). “Origin of the Mind.” Scientific American, September: 44–51.

[2] Proffitt, Tomos et al. (2016). “Wild monkeys flake stone tools.” Nature. Published online October 19, 2016.

[3] Hogenboom, Melissa (2015). “The traits that make human beings unique.” BBC. Published online July 6, 2015.

[4] Sawada, Hideyuki et al. (2008). “A Robotic Voice Simulator and the Interactive Training for Hearing-Impaired People.” Journal of Biomedicine and Biotechnology. Published online March 26, 2008.

[5] Sofge, Erik (2013). “The Sensitive Robot: How Haptic Technology Is Closing the Mechanical Gap.” Popular Science. Published online March 11, 2013.

[6] Puiu, Tibi (2016). “Stretchable artificial skin might make robots more human, and vice-versa.” ZME Science. Published online March 7, 2016.

[7] Hardesty, Larry (2015). “MIT SLAM System Helps Robots Better Identify Objects.” MIT News Office. Published online July 27, 2015.

[8] Hanson Robotics — Sophia.

[9] Bostrom, Nick, Superintelligence: “Another search process, tasked with creating an oscillator, was deprived of a seemingly even more indispensible component, the capacitor. When the algorithm presented its successful solution, the researchers examined it and at first concluded that it “should not work.” Upon more careful examination, they discovered that the algorithm had, MacGyver-like, reconfigured its sensor-less motherboard into a makeshift radio receiver, using the printed circuit board tracks as an aerial to pick up signals generated by personal computers that happened to be situated nearby in the laboratory. The circuit amplified this signal to produce the desired oscillating output.”

[10] Smith, Reginald D. (2013). “Complexity in animal communication: Estimating the size of N-Gram structures.” Entropy 2014, 16(1): 526–542.

[11] Xiong, W. et al. (2016). “Achieving Human Parity in Conversational Speech Recognition.” Cornell University Library. Published online October 17, 2016.

[12] Wurm, Kai M. et al. (2007). “Improved Simultaneous Localization and Mapping using a Dual Representation of the Environment.”

[13] De Carolis, Berardina et al. (2016). “Simulating empathic behavior in a social assistive robot.” Multimedia Tools and Applications, pp. 1–22.

[14] Bloom, Paul (2014). “Against Empathy.” Boston Review. Published online September 10, 2014.

[15] Emerging Technology from the arXiv (2015). “Why Self-Driving Cars Must Be Programmed to Kill.” MIT Technology Review. Published online October 22, 2015.

[16] Scalable Corporation. “The Moral Machine.” MIT Media Lab.

[17] Pinker, Steven (2012). The Better Angels of Our Nature: Why Violence Has Declined. Published September 25, 2012.

[18] Suzuki, Yutaka; Galli, Lisa; Ikeda, Ayaka; Itakura, Shoji; Kitazaki, Michiteru (2015). “Measuring empathy for human and robot hand pain using electroencephalography.” Scientific Reports 5: 15924. DOI: 10.1038/srep15924.