What’s stopping AI from making tough, judgemental decisions?
It will likely come as no surprise to you when I say Artificial Intelligence (AI) is all around us: recommending what we should buy or watch next, aiding the diagnosis of medical conditions, or showing us who we might want to follow or ‘friend’. These are all noble exercises, and tremendous amounts of research and work have gone into each (even if it may not feel like it when Amazon recommends you buy another toilet seat…), but what is stopping AI from making even smarter decisions?
For AI to work effectively (and excuse the gross over-simplification), all it needs is a large volume of high-quality input ‘training’ data to learn from, plus labelled outputs to match that data to. From there, the AI can ‘learn’ which patterns of behaviour lead to which outcomes.
For example, to decipher human handwriting into text, we start by providing the AI with reams of handwritten data and telling it what that data says. The AI then uses this ‘knowledge’ to try to guess what new, ‘unseen’ handwriting might say, and we confirm whether it is correct or not (this is essentially the same method Google has used to ‘read’ the world’s old books, via those annoying little reCAPTCHAs you complete to prove you aren’t a robot).
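To make the loop above concrete, here is a minimal sketch of ‘learn from labelled examples, then guess on unseen ones’. It uses scikit-learn’s bundled handwritten-digit images as a stand-in for handwriting data; the library and model choice are mine for illustration, not anything the handwriting example above prescribes.

```python
# Sketch of supervised learning: fit on labelled handwriting, score on unseen handwriting.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1,800 labelled 8x8 images of handwritten digits 0-9

# Hold some examples back: the model never sees these during training.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)              # 'learn' from the labelled examples
accuracy = model.score(X_test, y_test)   # guess on 'unseen' handwriting, check against labels
```

The key point is the split: the model is judged only on examples it was never shown, which is exactly the “guess what new ‘unseen’ handwriting might say” step.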
Now, back to tougher decisions. We need good input and output data for our AI to ‘learn’ from. That is all well and good when the data is collected and recorded digitally; unfortunately, that isn’t how most professionals (doctors, accountants, lawyers, etc.) work. They are notoriously bad at concisely ‘showing their working’ for how they came to a decision.
Take the example of an accountant reaching a difficult, judgemental accounting decision. To get there, they may have had to factor in accounting standards, regulations, legislation, industry precedent, financial statements and conversations with myriad other experts within their firm. All of this information would have had to be processed and applied in the context of the specific case. The conversations alone could have happened in person, on video calls, over IM or by email, further adding to the complexity of gathering the input information.
Once all of this has been processed and distilled, however, the output is often a simple “yes” or “no” as to whether something complies. The same goes for legal decisions (“guilty” or “not guilty”) and medical ones (“ill” or “not ill”).
Sometimes these outputs are recorded in one place (though not always), so we have some semblance of an output we could use for our AI. The problem lies in capturing the input data used to come to the decision. Herein lies the blocker to AI solving the ‘tougher’, more judgemental problems.
Until we solve this problem, we won’t really learn. In my eyes there are two solutions, neither easy nor pretty, but absolutely necessary to scale expertise quickly and efficiently:
- Come up with an intuitive, unobtrusive, consistent way for people to document their rationale for decisions, using detailed tagging
- Build a tool that captures, parses and links content and interactions in an intelligent way, so that all of the information made available to the decision maker can be logged and learned from
As you can see, neither would be simple (I’d argue the first may be even more difficult than the second, as changing human behaviour is often harder than building tools), but the pay-offs would be immense.
In accounting, you could massively reduce overheads and speed up decision-making, possibly even reducing some decisions to chatbots accessible by anyone. In law, you could cut turnaround times (and hence fees) and start to build more efficient legal ‘FAQs’, where knowledge stacks up and builds instead of the same decisions being made independently by different people all over the world. Finally, in medicine, you could provide standardised care anywhere in the world for a fraction of the cost.
It’s a big, hard problem that I don’t believe is being tackled at the moment, but it is surely one worth tackling!
If you’ve got any thoughts on this, comment below or join the conversation here.