The AI revolution will be different from the microprocessor revolution

Andrew Kemendo
3 min read · Jul 27, 2016


This post stems from a conversation I had on Hacker News about a recent article from Bruce Schneier.

Executive Summary: Reinforcement Learning (RL) is going to reshape how we build software and businesses in the near future. This will make it increasingly difficult to start companies that compete with and disrupt incumbents the way the microprocessor revolution made possible.

The past few years of progress in machine learning, specifically reinforcement learning, have been exciting, with some breathtaking and well-publicized results. Until about a year ago, however, I never really thought Q-learning or Policy Gradients were huge breakthroughs that would put us on a truly revolutionary path; they seemed more like steps in the right direction. My mind has now changed.
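
To make the term concrete, here is a minimal sketch of tabular Q-learning on a hypothetical toy problem. The corridor environment, the reward, and the hyperparameters are all illustrative assumptions, not anything from the publicized results; the point is only that the agent is never told how to reach the goal and improves purely from reward feedback.

```python
import random

# Minimal tabular Q-learning sketch. The corridor task, rewards, and all
# hyperparameters are hypothetical, chosen only to illustrate the idea.

N_STATES = 5            # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action index]

def greedy(values):
    # pick the highest-valued action, breaking ties at random
    best = max(values)
    return random.choice([i for i, v in enumerate(values) if v == best])

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        a = random.randrange(2) if random.random() < EPSILON else greedy(Q[state])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)   # after training, "go right" (index 1) dominates in every cell
```

Nobody wrote down the route; it fell out of rewards and repetition.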

Stated simply: I believe a Reinforcement Learning architecture will replace explicit, instruction-based architecture for building software within a decade.

Reinforcement Learning tasks rely on ridiculous amounts of data. Whereas traditional software architecture accomplishes tasks through explicit instructions, RL trains for tasks over millions of trials through a reward system. Most importantly, once you have trained it to some minimum level and deployed it correctly, it should continue improving, so long as you bake feedback into the UX. Imagine that instead of telling Excel what to do, you and every other user will have a conversation with Excel, improving the system incrementally.

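To sketch what baking feedback into the UX could look like: treat each user interaction as a trial, and an accepted suggestion as a reward. The snippet below is a hypothetical epsilon-greedy bandit, not anything Excel actually does; the suggestion names, acceptance rates, and update rule are all assumptions for illustration.

```python
import random

# Hypothetical sketch of reward-driven UX: the software picks among candidate
# behaviors, and every user accept/reject becomes a training signal.

suggestions = ["sum_column", "chart_trend", "fill_series"]  # assumed candidates
value = {s: 0.0 for s in suggestions}   # running estimate of acceptance rate
count = {s: 0 for s in suggestions}
EPSILON = 0.1

def choose():
    if random.random() < EPSILON:
        return random.choice(suggestions)             # occasionally explore
    return max(suggestions, key=lambda s: value[s])   # usually exploit

def record_feedback(suggestion, accepted):
    # incremental average: nudge the estimate toward the observed reward
    count[suggestion] += 1
    reward = 1.0 if accepted else 0.0
    value[suggestion] += (reward - value[suggestion]) / count[suggestion]

# Simulated users who accept "fill_series" 70% of the time, the rest 20%.
for _ in range(2000):
    s = choose()
    accepted = random.random() < (0.7 if s == "fill_series" else 0.2)
    record_feedback(s, accepted)

print(value)   # "fill_series" should end up with the highest estimated value
```

Every accept or reject makes the next suggestion a little better, and the improvement compounds across all users.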

These are simplifications of course, and reducing complex decision systems to a series of reward goals is not trivial. But the general idea, that software architecture is going to move from “do this exactly” to “improve constantly,” is, I think, valid.

Perhaps for very simple tasks, small teams will be able to build the data they need to train some early systems. I think, however, that the more likely outcome is that massive companies with huge data sets and throngs of amazing ML people, like Google etc., will be the ones who can take advantage of these systems. Google is already doing it with its data-center power systems, and with an untold number of other projects nobody knows about that would otherwise have been done by a niche company.

This has some pretty staggering implications for how small companies will (or will not) be able to compete with or disrupt existing players going forward. People have been saying that “Big Data” is dead, but they haven’t considered how critical that data is for RL.

The microchip revolution democratized computing by giving hobbyists and hackers the power that only massive companies had. RL software architecture can’t be democratized without massive data sets, and only a handful of companies have such sets. Unless those companies open source their data sets (they won’t, for many reasons, including legal ones), we small-timers have to build our own, a challenge far more massive than it looks. By the time we can, the big players will be able to scoop us up or use their existing scale to outcompete us.

Disruptive innovation has historically been niche players finding gaps in processes (or markets) and then moving faster and more iteratively than the big players to patch (or fill) them. If the faster and more iterative path forward is leveraging exabytes of data and hundreds of machine learning engineers, then the big players are going to have a massive advantage from here on out. And that is a big deal.
