[Image: An umbrella made of concrete, an art piece by the Greek 3D artist Katerina Kamprani]

Risk of Rain on the Automation Parade?

If ML models make the wrong decisions, who is at fault? Can ML models be governed? Do they need to be governed somehow?

Eva Nahari · Published in DNX Ventures Blog · Apr 1, 2021


I know in previous blog posts I have been sharing my excitement about more automation vs. bells and whistles. I strongly believe automation is the new usability movement, and that it is a welcome and timely relief for over-stretched teams. Especially during a pandemic, the need for “always on” has increased, and the need for service efficiency keeps growing in a world where teams are more often distributed geographically. It is simply easier to get things done via a service than to wait for other time zones to wake up and help push processes forward. So, no doubt, I am a big fan of automation services!

But let me peel the onion a bit. Where does automation stem from? In many cases (not all, but many), automation is only possible because you have trained a model on human decision patterns or on tons of collected data. For example, you can train an ML model on the language used in known fraudulent claims, then score incoming claims, based on their wording and word-choice nuances, by how likely they are to be fraudulent. In this way, you have automated the human sampling step, or at least expedited that process to hopefully create more accurate sample sets.
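To make that concrete, here is a minimal sketch of what such a claim-scoring model could look like in Python, assuming scikit-learn is available; the example claims and labels are entirely hypothetical:

```python
# Minimal sketch of a fraud-likelihood scorer trained on claim text.
# Assumes scikit-learn; the claims and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical claims labeled by human reviewers (1 = fraudulent, 0 = legitimate).
claims = [
    "Total loss, all receipts unavailable, urgent payout requested",
    "Rear bumper damage from parking lot collision, photos attached",
    "Entire contents of home destroyed, no inventory available",
    "Cracked windshield from road debris on the highway",
]
labels = [1, 0, 1, 0]

# TF-IDF captures word and phrase usage; logistic regression turns it into a score.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(claims, labels)

# Score an incoming claim: estimated probability that it is fraudulent.
incoming = ["All documents lost, immediate wire transfer needed"]
fraud_score = model.predict_proba(incoming)[0, 1]
print(f"Fraud likelihood: {fraud_score:.2f}")
```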

From there you can automate which action to trigger: route a suspicious claim to human review, or, after further automated processing steps, decline it immediately (or refer it for legal action). The end vision, of course, is to avoid spending valuable human time on claims that are a waste of time, and to avoid both the payout and the later cost of legal action against fraudulent claims.
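Continuing the sketch above, the triggered action might be as simple as a thresholded routing rule; the cutoffs here are made-up illustrations, not recommendations:

```python
# Hypothetical routing rule on top of the fraud score; the thresholds are
# illustrative only and would need tuning against real business costs.
def route_claim(fraud_score: float) -> str:
    """Decide the next processing step for a claim based on its fraud score."""
    if fraud_score >= 0.90:
        return "decline"       # high confidence: decline outright (or refer to legal)
    if fraud_score >= 0.50:
        return "human-review"  # uncertain: send to a human investigator
    return "auto-process"      # low risk: continue normal automated processing

print(route_claim(0.97))  # -> decline
print(route_claim(0.62))  # -> human-review
```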

This is nirvana for many organizations: offloading human tasks where mistakes can be costly, or where human time is better spent elsewhere. Automation is growing because it helps business decisions become more accurate and faster to act on, letting us use humans where humans are most valuable and useful.

All good so far, but now let’s say the model was faulty. What if it had somehow been contaminated, trained on overly biased data, or trained on data that was never supposed to be allowed into the training process? Or what if a trained model works for one business unit’s use case but makes the wrong decisions for another? Who is responsible? Is there a need for governing ML models? I have just started noodling on this and may find myself enlightened in a few weeks, in which case I will most likely write a follow-up blog. But for now, I shall leave it a bit open-ended. I am not sure if this is a real problem or not, so please share your thoughts. Often my enlightenment comes from entrepreneurs who have already noticed the challenge and started working on a startup around it. Feel free to hit me up if you have! :)
