The Automation Maturity model — Marty, where is my flying omniscient personal assistant?

Colin Kelly
3 min read · Feb 1, 2017


Marty McFly in Back to the Future (Universal Pictures, 1985)

As stories of robot teachers (2010), AI game-show winners (2011), Big Data election victories (2012), objects communicating with one another (2013), self-driving cars (2014), robot-run hotels (2015) and delivery drones (2016) have become the new normal, the outgoing Obama administration has made policy recommendations premised on a future American economy deeply affected by AI-driven automation, with an estimated 47% of US jobs at high risk of computerization in the next 20 years.

It’s perhaps unsurprising, therefore, that an abundance of words, terminology and concepts floats around the automation and artificial intelligence sphere, often achieving ‘buzzword’ status and being cited in isolation, when really they ought to be considered in concert.

We can think of automation as the consolidation of a number of such ‘concepts’, intertwined and often related to one another. What follows is my take on how some of these fit together and the level of maturity that flows from them.

Automation Maturity model

We begin by plotting, for each ‘concept’, the number and complexity of the tasks performed (X-axis) against the extent to which the behaviours exhibited are ‘human-like’ (Y-axis).

Next we demarcate the primary ‘operation modes’: concepts relating to direct interaction with humans (mode of interaction, in green); to gaining and producing knowledge (thinking, in purple); and, finally, to interaction with other systems (machine-machine interaction, in orange).

Finally, as we move further from the origin along both axes, we can slice out a progression of ‘automation maturity’ stages derived from the previous two steps, enabling us to isolate the components necessary for achieving a given level of maturity (a rough code sketch of this framing follows the list below):

  1. Rules-based automation relies on pre-defined, prescriptive rules for simple decision-making: this is the classic era of bits-and-bytes computation.
  2. Advanced virtual agents include ‘dumb’ chat-bots and some specialist-task robots, e.g. the automated e-passport gates at airports and the “how can I help?” assistant on a retail website.
  3. Cognitive agents are where things start to get interesting: immature examples might include Siri/Cortana, but even these struggle with some of the more human-like exchanges. To fully tick the ‘cognitive’ box we should expect a rich interaction experience across a range of tasks, with nearly always-correct, well-sourced outputs.
  4. ‘Real AI’ is the holy grail: indistinguishable from humans in interactions, hyper-intelligent and able to connect to the right entities at the right time. This zone introduces not only a world of exciting opportunities but a whole range of ethical and philosophical quandaries (cf. Asimov’s laws).
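To make the framing more concrete, here is a minimal sketch, in Python, of how the two axes, the operation modes and the maturity stages might hang together. The concepts, coordinates, scales and stage thresholds below are invented purely for illustration; they are not taken from the diagram.

```python
from dataclasses import dataclass

# Illustrative placement of concepts on the model's two axes:
#   x = number/complexity of tasks performed, y = how 'human-like' the behaviour is.
# The 0-10 scales and the coordinates below are assumptions for this sketch.

@dataclass
class Concept:
    name: str
    task_complexity: float   # X-axis
    human_likeness: float    # Y-axis
    mode: str                # 'interaction', 'thinking' or 'machine-machine'

CONCEPTS = [
    Concept("Scripted macro",         1, 1, "machine-machine"),
    Concept("E-passport gate",        3, 3, "interaction"),
    Concept("Retail chat-bot",        3, 4, "interaction"),
    Concept("Siri/Cortana",           6, 6, "thinking"),
    Concept("Hypothetical 'real AI'", 9, 9, "thinking"),
]

# Maturity stages sliced out by distance from the origin; thresholds are arbitrary.
STAGES = [
    (3.0,          "1. Rules-based automation"),
    (6.0,          "2. Advanced virtual agents"),
    (10.0,         "3. Cognitive agents"),
    (float("inf"), "4. 'Real AI'"),
]

def maturity_stage(c: Concept) -> str:
    """Map a concept to a maturity stage via its distance from the origin."""
    distance = (c.task_complexity ** 2 + c.human_likeness ** 2) ** 0.5
    return next(name for bound, name in STAGES if distance <= bound)

if __name__ == "__main__":
    for c in CONCEPTS:
        print(f"{c.name:<25} mode={c.mode:<16} stage={maturity_stage(c)}")
```

Running it prints each concept alongside its operation mode and the maturity stage it falls into, which is essentially what the diagram does visually.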

The diagram isn’t designed to be comprehensive in terms of concepts, but I hope it might be a helpful way of framing the abundance of buzzwords in this space and how ‘advanced’ new products and technologies really are.

Dr Colin Kelly is interested in how analytics and artificial intelligence can influence policy, shape society and transform the way organisations do business. He holds a BA in Maths and Computer Science from the University of Oxford and an MPhil and PhD in Natural Language Processing from the University of Cambridge.

With thanks to Sam Mackenzie and Nicholas Borge for their input.
