Saying “you need people to interpret data” is fine, until AI becomes less terrible

Sandy Rogers
Published in Saberr Blog · 3 min read · Sep 23, 2016

When an event popped up at the Royal Institution called “The science and art of high-performing teams”, we obviously had to go!

It was billed as a multi-disciplinary discussion on marginal gains and the never-ending chase of elite performance.

The opening discussion was a delight for us, because they made all the arguments that we usually have to make ourselves:

  • teams make a significant contribution to the success of many organisations, so their collective performance is critical
  • performance analysis (and therefore improvement) has progressed rapidly in recent years, driven by the availability of data

But then I felt the conversation stalled.

They arrived at a consensus that:

understanding people needs a mixture of data and people to interpret that data. Or at least, you need that mix to take any smart action based on what you’ve learned.

It’s an easy escape to say that. It feels right. It feels comfortable: those in control are still people, still able to override the data if they need to. It makes sense when you think how limited the technology in our phones is… “God, we’d better still be in control, because Siri is a moron”.

But this ignores the future. Many people (and a whole load of investment dollars) are betting that in the next few years we’ll iterate towards artificial intelligences which, to be useful and achieve the dreams of their creators, had better be sufficiently human-like not to drive us mad with frustration.

As we put more AI in more places, physically or otherwise, we expect more from it. Witness the early reviews of Google’s latest general-purpose AI, Google Assistant. Now that Google’s got itself into your chat conversations, you’ll try to talk more naturally to it/him/her and expect her to know what you’re on about. And you’ll be unforgiving if she doesn’t.

Creativity, intuition, interpretation and other big words we like to apply only to people are going to be checked off by robots if we’re ever to make them less annoying.

So sure, it feels cosy knowing that they need us. But we want them to help us, and we want them to be mildly less infuriating each day. And the better they get, the more futile our protectionist claim that “we are the only intellectuals” becomes.

You know that dream you’ve been sold, where you say “Geez, I could sure do with a holiday”, and your phone figures out just what you want (hint-hint: it isn’t just the destination you last googled), what you can afford (not just 33% of your bank balance), and when your work is going to allow you to take it (not just when your calendar is blank)? Well, that same dream is going to be able to hear your team say “Geez, we haven’t done ANYTHING this month” and figure out why.

A lump of data isn’t information without knowledge and context. And information doesn’t lead to action without some convincing. Right now, all that means is that people are super-powered by having data behind their decisions. But that will change one day.


Sandy Rogers

Reformed astrophysics researcher, recovering marathon runner, and recalcitrant data wrangler @SaberrUK.