Artificial Intelligence: On A Scale Of Zero To Human, Where Are We Right Now?

(From left) Spotify's Discover Weekly, a curated playlist based on my music preferences; Nest, a smart thermostat that learns its users' lifestyles; Google Services, a suite of integrated services that learns different aspects of its users

In our lifetime, we've already seen computers move rapidly along the spectrum of intelligence as they develop more facets of human intelligence. From computing basic functions on human-supplied inputs, computers are now able to recognise patterns, draw conclusions, and even make predictions. It's not hard to see the gap between human and machine intelligence narrow even further in the future. For me, the measure of intelligence is based on three criteria: (1) knowledge acquisition; (2) knowledge application; and (3) emotional intelligence.
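To make the "recognise patterns and make predictions" claim concrete, here is a minimal sketch, assuming scikit-learn is installed; the features, data and labels are invented purely for illustration:

```python
# A toy illustration of pattern recognition: fit a decision tree on
# labelled examples, then ask it to predict a label for an unseen input.
from sklearn.tree import DecisionTreeClassifier

# Invented data: [hours of daylight, temperature in C] -> season label
X = [[9, 2], [10, 5], [14, 22], [15, 25], [12, 14], [11, 8]]
y = ["winter", "winter", "summer", "summer", "spring", "autumn"]

model = DecisionTreeClassifier().fit(X, y)  # acquire a pattern from data
print(model.predict([[14, 24]]))            # apply it: likely "summer"
```

Nothing here resembles understanding, of course; the point is only that the leap from "computing basic functions" to "drawing conclusions from data" is now a few lines of code.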

Theory of Knowledge; image from Ian Barton's blog

The most rudimentary form of intelligence begins with the ability to acquire knowledge. It brings to mind some ideas we discussed during 'Theory of Knowledge', a subject I took in high school. This facet of intelligence concerns itself with understanding the world around us through sense perception, reason, language and emotion. Sense perception allows us to gather first-hand information. Through logic and reasoning we gain insights based on inferences. Language is our means of communicating information, and it inherently shapes the way we think about things. Emotion helps us navigate intangible and instinctive information that is not always expressed outwardly. Computers' strength has always been reasoning and logic. With developments in Machine Learning and Natural Language Processing, they're getting better at sense perception and language too. The biggest challenge for computers will be grappling with emotion, something humans are innately good at.

The messy world of language and communication; image from Il Cartello
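As a small illustration of machines getting better at language, and approximating emotion at least superficially, here is a sketch using NLTK's VADER sentiment analyzer; it assumes nltk is installed and fetches its lexicon on first run:

```python
# A rule-based sentiment analyzer scores how positive or negative a
# sentence *reads*, without anything like a felt experience behind it.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("I love how this playlist knows my taste!"))
# -> a dict of neg/neu/pos scores and a 'compound' score near +1
```

The scores are a shallow proxy for emotion, which is exactly the gap described above.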

Gathering knowledge, however, is just the first step. The next level involves processing this knowledge, which requires mastery of a few other skills that centre on the application of knowledge:

  • Problem solving: understanding a scenario and the underlying rules and principles at play, then developing solutions based on the prioritisation of goals
  • Creativity: drawing inspiration from seemingly unrelated fields or contexts to come up with new ways to approach problems
  • Learning: gathering insights from experiences and applying them to make better decisions in any given circumstance (see the sketch after this list)
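Of the three, learning is the easiest to make concrete. Below is a bare-bones sketch, with all numbers invented, of an agent that keeps a running estimate of each option's value and gradually favours the better one; this is the classic epsilon-greedy bandit strategy, offered here only as an illustration of learning from experience:

```python
# Learning from experience: each outcome nudges the agent's estimates,
# and the estimates in turn drive better decisions.
import random

values = {"A": 0.0, "B": 0.0}       # the agent's estimated payoffs
counts = {"A": 0, "B": 0}
true_payoff = {"A": 0.3, "B": 0.7}  # hidden from the agent

for step in range(1000):
    if random.random() < 0.1:                 # occasionally explore
        choice = random.choice(["A", "B"])
    else:                                     # mostly exploit the best guess
        choice = max(values, key=values.get)
    reward = 1 if random.random() < true_payoff[choice] else 0
    counts[choice] += 1
    # Incremental average: fold the new experience into the estimate.
    values[choice] += (reward - values[choice]) / counts[choice]

print(values)  # estimates should approach the hidden payoffs, with B > A
```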

Being holistically intelligent also requires emotional intelligence, which enables a sensitivity towards people, emotions and the relationships between people. It is probably the most complex form of intelligence and the most difficult for computers to exhibit.

There is a caveat, though. Can we distinguish a computer that truly embodies the criteria above from one that is really good at faking it? Honestly, I don't think we will be able to tell the difference in the future. One of the pitfalls we need to be aware of is over-estimating the intelligence of machines. This is going to be a big issue going forward, and one that we, as designers, need to think about. Alan Turing explores a similar idea, gauging computer intelligence by the indistinguishability of man and computer through what he calls "the imitation game", in his paper "Computing Machinery and Intelligence". [1]

In the next few years, a whole new range of responsibilities is likely to be offloaded onto computers. In "Man-Computer Symbiosis", Licklider argues that this close working relationship with computers could help divert our intellect to where it matters most. [2] However, with increasing reliance on (seemingly) smart technology come a few more issues that will fall under the purview of the designer:

Designing For Failure

We’ve been outsourcing a number of tasks to computers, especially those of the repetitive and tedious kind. As we assign higher-level cognitive tasks to computers, what happens when something goes awry? As designers, we need to consider how systems, products and services behave in worst-case scenarios. In doing so, we can help make sure the things we build are safe and resilient.
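One concrete version of this, sketched below with a hypothetical threshold and hypothetical names, is to make the failure path an explicit, designed behaviour rather than an accident: act on a prediction only when confidence is high, and degrade gracefully to a human otherwise:

```python
# Design for failure: low-confidence predictions never trigger an
# automatic action; they fall back to a safe, explicit default.
CONFIDENCE_THRESHOLD = 0.9  # hypothetical cut-off, tuned per application

def decide(prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-apply: {prediction}"
    # The worst case is designed, not accidental: escalate to a person.
    return "escalate: route to human review"

print(decide("approve_loan", 0.97))  # auto-apply: approve_loan
print(decide("approve_loan", 0.62))  # escalate: route to human review
```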

Designing For Accountability

When developing new technologies, it's often the case that the law has not caught up to address the nuances of their societal impact. Recently, this has been a hot topic of debate with regard to accountability in car accidents involving autonomous vehicles. Questions arise as to who will be held liable. Is it the passenger? The software company? The car manufacturer? All of them? After all, the autonomous vehicle is the result of combined efforts. As designers, and therefore user-advocates, it is our responsibility to pose these tough questions early in the design process, with the involvement of multiple stakeholders. The consensus reached is likely to be iterated upon, over and over, until the knots of accountability are untangled.

Designing For Uncertainty

The truth is that no one knows how a piece of intelligent technology will impact people's lives years from now. We're making educated guesses, at best. In the face of uncertainty, the values embedded into technology take on a much bigger role. These values will live on beyond our lifetimes, and they will drive how people interact with this technology and the value it brings to society.

At times, when delving into the challenges above, we may end up at a dead end, where no reasonable resolution can be found. These are valuable points for contemplation, and a time to ask ourselves, "should we really be doing this?" As we consider this carefully, we owe it to society (and ourselves) to ensure that every artefact we create, every system we build, every piece of technology we bring to life, is a reflection of the future we want to live in.

[1] Alan Turing, "Computing Machinery and Intelligence." Mind 59, no. 236 (1950): 433–60.

[2] J.C.R. Licklider, "Man-Computer Symbiosis." IRE Transactions on Human Factors in Electronics HFE-1, no. 1 (1960): 4–11.
