No Gear Consulting

What’s stopping AI from making tough, judgemental decisions?

Photo by Juan Rumimpunu on Unsplash

It will likely come as no surprise to you when I say Artificial Intelligence (AI) is all around us: recommending what we should buy or watch next, aiding the diagnosis of medical conditions or showing us who we might want to follow or ‘friend’. All noble exercises, and tremendous amounts of research and work have gone into each (even if you may not feel like it has when Amazon recommends you buy another toilet seat…), but what is stopping AI from making even smarter decisions?

For AI to work effectively (and excuse the gross over-simplification), all it needs is a large volume of high-quality input ‘training’ data from which to learn, and corresponding outputs to match that data to. From here the AI can ‘learn’ which patterns of behaviour result in which outcomes.

For example, to decipher human handwriting into text we start by providing the AI with reams of handwritten data, telling it what this data says. The AI then uses this ‘knowledge’ to try to guess what new ‘unseen’ handwriting might be saying, which we then confirm is correct, or not (this is essentially the same method Google has used to ‘read’ all of the old books in the world — using those annoying little reCAPTCHAs you do to prove you aren’t a robot).
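The learn-from-labelled-examples loop described above can be sketched in a few lines of code. This is only an illustration, not how Google's system actually works: each "handwriting sample" is reduced to a made-up two-number feature vector, and a simple nearest-neighbour rule stands in for the real AI — an unseen sample gets the label of the closest labelled training example.

```python
def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_data, sample):
    """Return the label of the closest labelled training example."""
    features, label = min(training_data, key=lambda ex: distance(ex[0], sample))
    return label

# Labelled input/output pairs: (features, what the handwriting says).
# The feature values here are invented purely for illustration.
training_data = [
    ((0.1, 0.9), "a"),
    ((0.2, 0.8), "a"),
    ((0.9, 0.1), "b"),
    ((0.8, 0.2), "b"),
]

# An unseen sample close to the "a" cluster is classified as "a".
print(predict(training_data, (0.15, 0.85)))  # prints "a"
```

The point is the shape of the problem, not the algorithm: with good labelled pairs, even a trivially simple learner can generalise to unseen inputs — without them, nothing can.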

Now, back to tougher decisions. We need good input and output data in order for our AI to ‘learn’. This is all well and good when that data is collected and recorded digitally; unfortunately, that isn’t how most ‘professionals’ (doctors, accountants, lawyers etc.) work. They are notoriously bad at concisely ‘showing their working’ of how they came to a decision.

Take the example of an accountant reaching a difficult, judgemental accounting decision. To come to that decision they may have had to factor in accounting standards, regulations, legislation, industry precedent, financial statements and conversations with myriad other experts within their firm. All of these pieces of information would have had to be processed and applied in the context of the specific case. The conversations alone could have taken place in person, on video calls, over IM or by email, further adding to the complexity of gathering the input information.

Once all of this has been processed and distilled, however, the output is often a simple “yes” or “no” as to whether something complies or doesn’t. The same goes for legal decisions (“guilty” or “not guilty”) and medical decisions (“ill” or “not ill”).

Sometimes these outputs are recorded in one place (though not always), so we have some semblance of an output we could use for our AI. The problem lies in capturing the input data used to come to the decision. Herein lies the blocker to AI solving the ‘tougher’, more judgemental problems.

Until we solve this problem, we won’t really learn. In my eyes there are two solutions, neither easy nor pretty, but absolutely necessary to scale expertise quickly and efficiently:

  • Devise an intuitive, unobtrusive, consistent way for people to document their rationale for coming to decisions, using detailed tagging
  • Create a tool that captures, parses and links content and interactions in an intelligent way, so all of the information made available to the decision maker can be logged and learned from
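To make the first idea concrete, here is one hypothetical shape a tagged decision record could take. Every field name and value below is invented for illustration — the article doesn't prescribe a schema — but the structure shows how an expert's inputs, reasoning and final judgement could all be captured together, ready to serve as training data later.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One expert judgement, captured with its inputs and rationale."""
    question: str                                   # the judgement being made
    outcome: str                                    # e.g. "complies" / "does not comply"
    rationale: str                                  # the expert's reasoning, in their own words
    sources: list = field(default_factory=list)     # standards, documents, conversations consulted
    tags: list = field(default_factory=list)        # detailed tags for retrieval and learning

# A purely illustrative record — names and content are hypothetical.
record = DecisionRecord(
    question="Does treatment X comply with accounting standard Y?",
    outcome="complies",
    rationale="Standard Y's criteria are met given the contract terms.",
    sources=["standard Y", "client contract", "call with audit partner"],
    tags=["accounting", "revenue", "judgemental"],
)
print(record.outcome)  # prints "complies"
```

The value of something like this is that the simple “yes”/“no” output and the messy inputs behind it finally live in one machine-readable place — exactly the input/output pairing the learning loop needs.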

As you can see, neither would be simple (I’d argue the first may be even more difficult than the second, as changing human behaviour is often harder than building tools), but the pay-offs would be immense.

In accounting you could massively reduce overheads and speed up decision-making, possibly even reducing some decisions to chatbots accessible by anyone. In law, you could cut times (and hence fees) and start to build more efficient legal ‘FAQs’ where knowledge stacks up and builds instead of the same decisions being made by different people all over the world. Finally, in medicine, you could provide standardised care wherever you were in the world, for a fraction of the cost.

It’s a big, hard problem that I don’t believe is being tackled at the moment, but for sure one worth tackling!

If you’ve got any thoughts on this, comment below or join the conversation here.




No Gear Consulting wants to make the world a better place through the thoughtful applications of technology, designing of human-centric user experience and delivery of innovative leadership and management of people.

Michael Heap

Entrepreneur/Founder (Tmup, Devas), startup and innovation consultant (No Gear Consulting), and fascinated by all things tech
