Artificial Intelligence on the Edge

Brian L Garr
3 min read · May 4, 2017

Having been in the AI industry for 22+ years, I’ve seen AI put in a lot of places: the shelf, the trash, and places perhaps best not mentioned in public. But something happened right around the turn of the century, when machine translation and speech recognition developers started using Hidden Markov Models (HMMs) to help computers make decisions about n-best results. The idea was that the more data you put in, the more accurate the results, and, to a degree, that was right. Then we started realizing that statistics only helped when the results were the “normal” results, but couldn’t anticipate the abby-normal (with attribution to Young Frankenstein). Semantics was the answer, and some, like Cognitive Code (full disclosure, I am their COO), added semantics to their models, which greatly improved accuracy.

So now we have all these cool gadgets, like Alexa, Home, Watson, Siri, etc. The one thing these AIs all have in common is that they all need the cloud behind them to hold enormous processing power and data. The “trigger word” products, like “Hey Google” and “Alexa,” are listening in, and who knows how much they keep or send back to George Orwell for processing and analysis. If you are an Alexa owner, have you ever sat there in your den, or kitchen, and out of nowhere Alexa says, “I’m sorry, I didn’t understand your question”? What? I didn’t ask Alexa any question! Why is she awake? Is she planning a hostile takeover of my house?

I was having lunch with a VP from Visa International a few months back, at a conference, and we were talking about the threat to all of our profiles, passwords, and more that are kept in the cloud, on massive servers that have to authenticate us when we log into our bank or credit card to prove we are who we say we are. That’s a problem, because we all know the stories about these big caches of data being hacked, followed by the letters from the hacked companies offering free credit monitoring. The answer, which is really quite elegant, is to put this information “on the edge.” What does that mean? Imagine that your phone had the ability to test your fingerprint and your iris to make sure that you are real, and alive, and then the only thing your phone had to do is send a message to your bank saying: yep, it’s Brian! That’s called being authenticated on the edge. The bank only has to verify that the message from your phone is authentic.
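To make the idea concrete, here is a minimal sketch of that “yep, it’s Brian” flow. Everything here is illustrative: the key, function names, and the use of a shared HMAC secret are my own assumptions, not any bank’s real API (real deployments would typically use device-held asymmetric keys and a challenge from the server). The point is simply that the biometrics are checked on the device, and only a short signed message crosses the network.

```python
import hashlib
import hmac
import time

# Hypothetical device key, provisioned at enrollment; it never leaves the phone.
DEVICE_KEY = b"secret-provisioned-at-enrollment"

def verify_biometrics_locally(fingerprint_ok, iris_ok):
    # Runs entirely on the device; raw fingerprint/iris data never leaves it.
    return fingerprint_ok and iris_ok

def make_attestation(user_id):
    # If the local checks pass, produce a signed "yep, it's Brian" message.
    if not verify_biometrics_locally(True, True):
        return None
    payload = "%s:%d" % (user_id, int(time.time()))
    sig = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def bank_accepts(message):
    # The bank checks only that the message is authentic; it never sees
    # fingerprints, irises, or passwords.
    expected = hmac.new(DEVICE_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])
```

A tampered payload fails the signature check, which is the whole contract: the bank trusts the device’s local verdict only because it can verify the message came from the enrolled device.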

Putting AI on the edge is the same idea. Put the AI engine and data on the phone or on the device, so the only time the AI has to go to the cloud is to fetch answers from Yahoo about weather, or from the NYSE about stock prices. There is no standing connection between the AI and the cloud; everything computes on the device. The technology exists today to do that. Maybe not from Amazon and Google, but companies like Cognitive Code already offer full AI experiences right from the device. It’s not easy, and I’m not saying that others will come around quickly, as Cognitive Code has multiple patents around the technology that allows us to run on the edge. But it is doable, and think about how much more secure you will feel when your child’s toy can’t send her/his voice back to our friend, George Orwell, to do with as he pleases. Think about it.
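The routing pattern described above can be sketched in a few lines. This is a toy illustration under my own assumptions (the tiny local knowledge table, the topic keywords, and the `FETCH_FROM_CLOUD` marker are all invented for the example, not Cognitive Code’s actual engine): the assistant answers from on-device data by default, and reaches out only for live external facts like weather or stock prices.

```python
# Toy on-the-edge routing sketch: the engine and its knowledge live on the
# device; the network is touched only for live external data.
LOCAL_KNOWLEDGE = {
    "what is your name": "I'm your on-device assistant.",
    "who made you": "I run entirely on this device.",
}

# Topics that genuinely require fresh data from a web service.
LIVE_TOPICS = {"weather", "stock"}

def answer(query):
    q = query.lower().strip("?! .")
    if any(topic in q for topic in LIVE_TOPICS):
        # Only here would the device reach out to the cloud, and only with
        # the query itself, never with stored audio or profile data.
        return "FETCH_FROM_CLOUD"
    # Everything else resolves locally; nothing leaves the device.
    return LOCAL_KNOWLEDGE.get(q, "I don't know that yet.")
```

The privacy property falls out of the structure: there is no always-open pipe to a server, so there is nothing for a curious backend to retain.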
