Business Challenges With Machine Learning

As we write the book Machine Learning in Practice (coming early in 2019), we’ll be posting draft excerpts right here.

Let us know what you think, give us a clap down below if you like what you read, and follow @InfiniaML and @RobbieAllen on Twitter for the latest updates!

While there are significant opportunities to achieve business impact with machine learning, there are a number of challenges too. Many of these issues stem from the sudden and dramatic rise in awareness of machine learning. In this article, I’ll talk about some of these challenges and how to overcome them.

Expectations Exceed Reality

No matter how much you’re able to accomplish with machine learning, you’ll probably fall short of somebody’s sci-fi inspired ideas about what should be possible. These expectations are relatively new. We know from experience how quickly expectations around artificial intelligence have accelerated.

Before I became CEO at Infinia ML, I founded and led a company called Automated Insights where we built a product called Wordsmith. With Wordsmith, you can create human-sounding narratives from underlying data — turning reported financial statistics into publishable stories for the Associated Press, for instance, or business intelligence data from platforms like Tableau into readable reports executives can use. While we didn’t use much machine learning, we were pioneering the commercial use of natural language generation and were considered an artificial intelligence provider.

When we were selling our solution in 2010, we had a difficult time convincing people to try it because of the negative connotations around artificial intelligence.

In 2010, the easiest way to end an interview early with a journalist was to mention “artificial intelligence”.

Potential customers didn’t see artificial intelligence as applicable to business, and it wasn’t something that most people could get their head around. They saw our “robot writing” solution as impossible magic.

Fast forward to 2014, after a few years of AI’s increasing prominence (including Watson’s win on Jeopardy!), and our company now had the opposite problem. Prospects wondered why our solutions weren’t even more magical. They would object that they had to provide their own input and expertise to set up the system — after all, shouldn’t artificial intelligence do all the work for them?

In just four years, we went from a total disbelief in what was possible to disappointment that we couldn’t do the impossible.

Today’s hype around ML and AI is both good and bad. On one hand, it’s easier than ever to talk about deploying solutions inside a company. Executives are generally receptive. On the other hand, some people’s expectations of what machine learning can do in practice can far exceed what is possible or even reasonable.

For example, there have been numerous advances around image analysis and object detection. People hear about Facebook’s ability to detect faces, or Google’s ability to recognize specific dogs and cats. Progress in this area has been stunning and apparent.

Meanwhile, progress on text has been slower. The advances in imaging have perhaps built up an expectation that areas like natural language generation should have moved faster than they have. Today, fully automated text generation doesn’t produce anything close to human-level quality. It sits at the outer limits of what’s possible, and it’s one of the harder problems to solve because text is much less structured than images. There are many languages, each with its own rules, and many of those rules aren’t quantified in a measurable way. There are good tricks for learning rules, but in general it’s a difficult challenge.

Gartner Hype Cycle from July 2017

Gartner’s Hype Cycle has shown machine learning on the rise for a couple of years now. By 2017, it was at the peak of expectations, meaning it was set to fall into the trough of disillusionment. I’ve been thinking for the last three years that we’re at peak AI. Granted, I continue to be wrong — but I expect a business backlash around AI in the not too distant future. Some of that backlash will be due to failed projects, like IBM Watson’s inability to deliver for the MD Anderson Cancer Center. Incidents like Uber’s autonomous car killing a pedestrian also fuel the backlash.

According to Gartner at least, hype cycles have a standard pattern: people buy into the hype, they get excited, but a human’s attention span is limited. After a while, once they haven’t seen the fully autonomous cars or Star-Trek-like computer interactions they’ve been promised, they start to become doubtful. Failed projects reinforce their skepticism, and people inevitably believe that this AI stuff isn’t all it was cracked up to be.

As an AI and ML entrepreneur, I welcome the backlash. Not only will it help bring expectations to a more rational level; it will also help cool the wage inflation that has run rampant for employees in the AI space. Such wage inflation is a core issue of the next challenge.

The Talent Gap

One major machine learning challenge is finding people with the technical ability to understand and implement it. This ongoing problem contributes to a backlog of machine learning inside the enterprise. Machine learning is at a point now where it can deliver significant capability, but if you don’t have people that can implement it, then all of the opportunities go unrealized. In fact, there’s at least a ten-year backlog of machine learning projects locked inside large companies, waiting to be set free. Every year that these projects pile up, the backlog gets worse.

One consequence of high demand and low supply in the market for good data scientists is the explosion of salaries in the space. The salary you could have hired a data scientist for four or five years ago may have risen by 50 percent just a few years later.

To be sure, it’s not overly challenging to find someone with “data scientist” on their resume. The question is whether they can do basic machine learning, let alone the more advanced machine learning and deep learning that some of the toughest data problems require.

Unfortunately for hiring managers, “data scientist” is a highly flexible term and, if data scientists really have “The Sexiest Job of the 21st Century,” candidates have plenty of incentive to use it in their job titles.

Data scientists can be highly published Ph.D.s, fresh graduates of a master’s degree program, or just anyone who took some online courses about machine learning or data mining in their free time.

The simpler machine learning techniques can be learned quickly. More complex versions of machine learning, especially deep learning, require significantly more training. I believe ninety percent of data scientists could not pass a deep learning algorithm implementation test.

Moreover, since putting machine learning into practice often requires software engineers to build robust, repeatable systems, data scientists also need at least some programming knowledge to make a business impact. That combination is even rarer. Many data scientists who are academically trained in machine learning lack experience working in a collaborative software development environment. You might find candidates who know the data science but not much of the programming, or who know the programming side well but only a little of the data science.

Expensive Computational Needs

To achieve any sort of large-scale data processing, you need GPUs, which also suffer from a supply and demand problem. Even large companies don’t necessarily have GPUs accessible to the employees who need them — and if their teams try to do machine learning on CPUs, training their models will take longer.

Even with GPUs, there are many situations where training a model could take days or weeks, so processing times can still be a limitation. This is different from traditional software development, where programs may take minutes or a few hours to run, but not days.

It’s fine for some models to take time to train, as long as results are served quickly in a production environment. A bigger challenge arises if you need to retrain or update the model often. Say you’re getting new data every day that you want your model to incorporate, but a full training run takes a week. The model can’t stay up to date with the latest data coming in. That’s not an uncommon problem: data arrives faster than the model can be retrained.
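A rough sketch of that arithmetic, with hypothetical numbers (the helper below is illustrative only, not a real capacity-planning formula):

```python
def staleness_range_days(training_days, data_interval_days):
    """Back-of-the-envelope bounds on how stale a continuously
    retrained model is (hypothetical numbers, not from the text).

    When a training run finishes, its data snapshot is already
    training_days old (best case). The model then serves until the
    next run finishes, so just before a swap it is roughly twice
    that old (worst case)."""
    best = max(training_days, data_interval_days)
    worst = best + training_days
    return best, worst

# New data arrives daily, but a full retrain takes a week: the deployed
# model always reflects data that is roughly 7 to 14 days old.
print(staleness_range_days(7, 1))  # (7, 14)
```

The point of the sketch: no matter how fresh the incoming data is, the deployed model can never be less stale than one full training run.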

Black Box Answers

Some people want to know why machine learning models make certain decisions. Why was a user served a certain ad? Why was a contract interpreted in a certain way? Why did the car move in the way that it did?

There’s an underlying belief that people should be able to explain why machine learning algorithms and other software took certain actions. That’s a fine goal in theory, but it sets a far higher bar for software than the one we set for ourselves, because humans are not interpretable either.

Just look at the studies about false memories, and people’s inability to explain why they made certain decisions. Or consider how people make decisions before becoming consciously aware of having made a choice. Human decisions are impacted by factors they are simply not aware of. This comes up in financial services, where some want to know why an algorithmic trade was made. The presumption seems to be that people could have objectively made those same calls — I don’t think they can.

This relatively recent backlash takes the position that if we can’t explain why a system made a decision, we shouldn’t use it. To take an extreme and tragic example, a self-driving car hits a pedestrian. But if you had a person in that same position, could they really explain why they did it? They might report being lost, or dazed, or distracted. Is that the real reason? Does the driver even know the real reason in their own mind?

Perhaps it’s even worse with people — at least we don’t have to worry about software being intentionally deceitful.

Nonetheless, some people get all hot and bothered about the fact that we can’t explain why algorithms are making certain decisions. This is largely a deep learning problem — inputs come in, various weights are applied to them, but you don’t know what triggered a certain outcome.

In the case of a failure, executives and policymakers would like to know which throat to choke by understanding which person or entity is ultimately responsible for the problem. For example, who is legally responsible when an autonomous car hits a pedestrian? The human in the driver’s seat who didn’t have control, but perhaps should have taken over at that moment? Is it the company that made the car, the company that made the software inside it, or the ride-sharing service operating it?

There’s no doubt that this is a tricky moral and legal challenge to untangle, but I’m not as bearish on it as others might be. Society has found ways to assign responsibility before. For example, there’s a clear legal line of responsibility if a wiring defect makes your car catch fire and someone dies. Assigning responsibility isn’t a new problem.

Data Hungry

Supervised learning is the predominant technique in machine learning. It requires not just data, but labeled data: inputs paired with the correct answers, so a model can learn to predict the outputs for future inputs. The availability of labeled data is a significant challenge for some machine learning projects; in fact, it restricts the problem space quite a bit. The Big Data phenomenon of the last 10 to 12 years may have led companies to collect data more diligently, but they don’t necessarily have that data labeled.

One challenge is that labeled data isn’t naturally occurring for the most part. It’s a bit easier to create with quantitative data, where answers can be computed or inferred from the data itself. That’s not the case with image data, for instance — there’s nothing inherent to a group of pixels to tell an algorithm that it’s a cat. Training the algorithm requires a human to first label the cat.
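To make the role of labels concrete, here’s a minimal sketch in plain Python with hypothetical toy measurements: even the simplest supervised learner, a 1-nearest-neighbor classifier, only works because a human attached an answer to each input.

```python
import math

# Toy labeled dataset (hypothetical numbers): each example pairs an
# input (height_cm, weight_kg) with the answer a human had to supply.
# Producing those labels is the expensive, often manual, step.
labeled_data = [
    ((25.0, 4.0), "cat"),
    ((23.0, 3.5), "cat"),
    ((60.0, 25.0), "dog"),
    ((55.0, 20.0), "dog"),
]

def predict(x):
    """1-nearest-neighbor: answer with the label of the closest labeled example."""
    _, label = min(labeled_data, key=lambda pair: math.dist(pair[0], x))
    return label

print(predict((24.0, 3.8)))   # cat
print(predict((58.0, 22.0)))  # dog
```

Strip the labels out of `labeled_data` and there is nothing left for `predict` to return; that, in miniature, is why unlabeled data alone doesn’t unlock supervised learning.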

Meanwhile, unsupervised learning has its own data struggles. In this case, there are no answers provided in a training data set, and algorithms must find answers on their own. This requires significantly more data than supervised learning, and unsupervised problems tend to be harder to solve with machine learning. The techniques aren’t as straightforward as supervised learning’s, so unsupervised learning hasn’t been applied as much in business contexts.
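For contrast, a toy sketch of unsupervised learning: a bare-bones one-dimensional k-means (illustrative code with made-up readings, not a production algorithm) is handed unlabeled numbers and must discover the groupings on its own.

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Tiny k-means on 1-D data: no labels are given; the algorithm
    must discover cluster centers on its own."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Unlabeled readings that happen to come from two regimes (~1 and ~10).
readings = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
print(kmeans_1d(readings))  # two centers, near 1.0 and 10.0
```

Notice that the output is just two numbers; deciding what those clusters *mean* still requires a human, which is part of why these techniques see less business use.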

There are also numerous discussions around techniques that don’t require as much data. Machine learning, and especially deep learning, is often called “data hungry,” meaning it takes lots of data to make solutions work. Researchers are trying to figure out how to bypass or minimize that hunger, or at least feed it more efficiently. One approach has been to take a small data set and automatically create new, similar data.
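One naive version of that approach can be sketched in a few lines of Python (hypothetical data and function names; real augmentation pipelines are far more sophisticated):

```python
import random

def augment(samples, copies=3, noise=0.05, seed=42):
    """Naive data augmentation: fabricate new, similar examples by
    adding small Gaussian noise to each numeric feature, keeping the
    original label unchanged."""
    rng = random.Random(seed)
    augmented = list(samples)
    for features, label in samples:
        for _ in range(copies):
            jittered = tuple(x + rng.gauss(0, noise) for x in features)
            augmented.append((jittered, label))
    return augmented

# A tiny labeled set (hypothetical flower measurements) grows fourfold.
small_set = [((5.1, 3.5), "setosa"), ((6.4, 3.2), "versicolor")]
bigger_set = augment(small_set)
print(len(small_set), "->", len(bigger_set))  # 2 -> 8
```

The trick only helps when the jittered examples remain plausible; image augmentation works the same way with flips, crops, and rotations instead of numeric noise.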

The Good News: These Are Solvable Problems

A lot of machine learning problems get presented as new problems for humanity. In almost every case, that’s not really true. Machine learning challenges can be overcome:

  • The hype around machine learning will be sorted out by market forces over time.
  • The short supply of talent will be solved by market forces and increasing automation.
  • Technological developments will boost processing speeds.
  • People will eventually accept the fact that they can’t fully understand every decision a machine learning algorithm makes, just as they can’t fully understand decisions humans make.
  • New technologies and techniques will help companies create more of the data they need and/or reduce the amount of data they require.