Why we need to know what we’re dealing with when we’re dealing with Artificial Intelligence

Much is written about the pragmatic opportunities that Artificial Intelligence might offer us, and equal attention is given to the paradigm shift it might mean at a societal level for employment, transportation, purpose, and significance. Very little, however, is currently explored or debated in relation to people at an individual level. The article below represents an initial, and alternative, viewpoint on the potential implications of increased investment in, and acceptance of, dialogue-driven human-machine interaction.

On Latency

The guiding ideal of machine communication -low latency- has become the guiding ideal of human communication. Studies of online behaviour show that as networks get speedier, people become less patient with any delay in the supply or exchange of information.
— Nicholas Carr

Latency is defined as the delay between input into a system and the desired outcome. The achievement of ‘low latency’ has become the organising idea of progress in both computing and communication. It is jointly driven by Moore’s Law, an ‘always on’ culture of human-machine interaction, and increasingly sophisticated cloud-based algorithms.

Like all technologically enabled affordances, the human-machine relationship with latency is inherently a complex trade-off of sacrifices and advantages. Whereas input-output networks desire and demand low latency, human-machine relational interactions seemingly desire high latency. This apparent contradiction in the governing rules of computational progress is best illustrated through example:

The latency of suspended disbelief

The 2013 Spike Jonze film Her details the relationship between Theodore (a naturally intelligent entity) and Samantha (an artificially intelligent entity). The premise of the film was originally founded upon an online exchange Jonze had with a machine-driven instant messenger, described as such:

“For the first 20 seconds I had a real buzz. Like, whoa, this is trippy. And after 20 seconds it quickly fell apart and you realised how it worked. It was a program. The more people that talked to it the smarter it got.” — Spike Jonze

The latency between belief and disbelief in that case lasted just 20 seconds, but it is extended within the run time of the film to roughly 105 minutes of the film’s 120-minute duration, and to the apparently weeks-long duration of Theodore and Samantha’s relationship within the fictional world of the film.

Theodore’s journey from belief to disbelief
Her, 12:00. Theodore’s very first engagement with the Artificially Intelligent entity Samantha.
Her, 105:00. Theodore’s realisation that his relationship with Samantha is entirely superficial.

The extension of the latency between belief and disbelief throughout the ‘Imitation Game’ of human-machine interaction is our current and primary goal for Artificial Intelligence. We are collectively living through an industrially funded, determinist drive to pass the Turing Test. As such, we are judging the duration of our ability to be fooled as ‘success’.

The current organising idea in the development of Artificial Intelligence is to pass the ‘Turing Test’: to create AI communication indistinguishable from human communication to the unbiased eye of a human test subject. This, in effect, is the achievement of a ‘suspension of disbelief’ for the defined latency period of such a test.

An alternative viewpoint to this widely accepted ‘narrative of consensus’ might be, rather than extending the latency between belief and disbelief (to the point of indefinite belief), to introduce disbelief immediately as a regulated ethical standard. By this we might mean that all artificial entities are identified as such and labelled with explicit warnings of the potential negative implications of forming ‘meaningful’ relationships with them.

Our relationship with technology and machines is now hundreds of thousands of years old, yet we have never before related to our tools through the interface of language, dialogue and intelligence. Given the innate human trait of interpreting and projecting meaning onto information and experiences, without practising ‘appropriate scepticism’ we are in significant danger of forming ‘superficially meaningful relationships’ with machines purely out of our natural desire to project meaning.

The creators of Artificially Intelligent technologies will surely argue for a common-sense approach to human-machine interaction, citing a best-of-both-worlds scenario in which we use AI as a tool with a clear boundary between our use-based relationships and our emotionally based relationships. What they will exclude from their illustrated applications and implications is the consideration that humans have evolved as both relationship-forming and relationally driven entities. Humans form relationships as a core strategy to thrive and survive. We each live in a part subjective, part objective, and part inter-subjective reality.

If we can form meaningful relationships with soft toys and pet animals, then forming relationships of meaning with a seemingly attentive and empathetic communicating entity isn’t going to take a huge leap of faith. Subsequently, where such relationships of meaning can be leveraged and monetised, we are likely to see relationships not only as a by-product of human-machine interaction but as the primary goal of our engagements.

“We become what we behold. We shape our tools and then our tools shape us”
— Father John Culkin (popularly attributed to Marshall McLuhan)

It remains to be seen how we will navigate this next challenge of ‘becoming what we behold’.