If It’s Not Perfect, Don’t Do It

If at first you don’t succeed, try, try again.

It’s a great philosophy for life, but is it a good philosophy for machine learning? Ramtin Seraj isn’t so sure. He believes that deep learning, the branch of machine learning currently favored by companies like Google and IBM in which computers effectively teach themselves, needs to steer clear of tasks that require perfect accuracy. He sat down with us this week to explain why.

Google recently announced that it’s teaching neural networks to encrypt text messages, with the goal of automating both encryption and decryption. This is a lofty goal, and it will no doubt take a long time before the networks reach 100% accuracy in these trials. The problem arises if they never hit that 100% mark.
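For the curious, the research behind that announcement (Abadi and Andersen’s 2016 paper “Learning to Protect Communications with Adversarial Neural Cryptography”) pits three networks against each other. The sketch below is our own toy illustration of that training loop in PyTorch, not Google’s code; the network sizes, learning rates, and loss terms are all simplified assumptions.

```python
# Toy sketch (not Google's code) of adversarial neural cryptography:
# "Alice" learns to encrypt, "Bob" decrypts with a shared key, and "Eve"
# tries to decrypt without the key. All sizes here are assumptions.
import torch
import torch.nn as nn

N = 16      # bits per plaintext and per key (toy size)
BATCH = 64

def keyed_net():
    # Takes a message plus a key; the original paper uses conv layers instead.
    return nn.Sequential(nn.Linear(2 * N, 64), nn.Tanh(), nn.Linear(64, N), nn.Tanh())

def sample_bits():
    # Random bit vectors encoded as values in {-1, 1}.
    return torch.randint(0, 2, (BATCH, N)).float() * 2 - 1

alice, bob = keyed_net(), keyed_net()
eve = nn.Sequential(nn.Linear(N, 64), nn.Tanh(), nn.Linear(64, N), nn.Tanh())

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)

for step in range(5000):
    p, k = sample_bits(), sample_bits()   # plaintext and shared key

    # Eve trains on the ciphertext alone, trying to recover the plaintext.
    c = alice(torch.cat([p, k], dim=1))
    eve_err = (eve(c.detach()) - p).abs().mean()
    opt_eve.zero_grad(); eve_err.backward(); opt_eve.step()

    # Alice and Bob train so Bob reconstructs p while Eve stays near chance
    # (mean absolute error of about 1.0 on {-1, 1} targets).
    c = alice(torch.cat([p, k], dim=1))
    bob_err = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    loss_ab = bob_err + (1.0 - (eve(c) - p).abs().mean()) ** 2
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()
```

Notice that nothing in this loop proves the learned cipher is secure; it only shows that one particular adversary failed to break it. That gap is exactly the problem Seraj describes below.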

Microsoft recently announced they had achieved better-than-human results in speech recognition. That’s amazing, but humans actually only hit about 80% accuracy. For voice commands, that leaves an acceptable margin of error. After all, if the computer isn’t sure it understood you, it can do as Siri does: “I’m sorry, I’m not sure I understand.”
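That fallback is easy to picture in code. Here’s a minimal sketch of the pattern in Python; the threshold value and function name are our own invention, not any real assistant’s API.

```python
# Minimal sketch of the "ask again when unsure" fallback described above.
# The threshold and function are hypothetical, not any real assistant's API.
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; real systems tune this per task

def respond(transcript: str, confidence: float) -> str:
    """Act on a recognized command only if the recognizer is confident enough."""
    if confidence < CONFIDENCE_THRESHOLD:
        return "I'm sorry, I'm not sure I understand."
    return f"OK, running: {transcript}"

print(respond("set a timer for ten minutes", 0.93))   # acts on the command
print(respond("sed a timmer fro den mimutes", 0.41))  # asks the user to repeat
```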

But what if the task set before the computer is one with no room for error? What if algorithms are performing surgery unassisted, or handling sensitive data? In cases like these, a 20% error rate could be literally fatal.

As Ramtin explains, “While it’s highly unlikely that Google will roll out these encryption algorithms without proof that they work, that proof is very hard to get. Using test data to train or validate your machine learning models means you can’t prove they will work for future, unseen data or cases.” In essence, you never know for sure until it goes live. That’s why you end up with instances like Microsoft’s Tay, which worked so well in a closed environment but couldn’t survive contact with the human factor.
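Seraj’s point about validation can be made concrete. The sketch below uses scikit-learn (our example, not his) to evaluate a model on a held-out test set; the score it prints is a statistical estimate of future performance, never a proof.

```python
# Sketch of held-out evaluation with scikit-learn (our example, not Seraj's).
# A test-set score estimates generalization; it cannot prove correctness on
# future inputs, especially ones drawn from a different distribution.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
# Tay scored well in its "closed environment" too; live traffic was a
# different distribution, and that is exactly the gap a test set can't close.
```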

But in this nascent field, there’s a lot to lose in the war of public opinion.

If a big, public encryption trial failed and everyone’s data was potentially exposed, it could do serious damage to deep learning’s reputation, drying up funding for deep learning projects and hurting machine learning as an industry.

So how do we solve this problem? Seraj suggests we steer clear of tasks that require 100% accuracy — at least for now. Give machine learning a chance to grow.

Or it might never reach adulthood.

— —

written by Wren Handman for Leviathan.ai
