5 Things You Should Know If You Want to “Do AI”

Susannah Shattuck
DataDrivenInvestor

--

So far, it seems like 2019 will be the year that artificial intelligence truly goes mainstream. We are talking about AI more than ever before, and the media coverage has “become less neutral and more positive,” according to the 2018 AI Index Report. 81% of the general public thinks that AI “constitutes the next technology revolution,” according to Edelman’s 2019 survey. It seems like everybody wants to “do AI” — but many folks are confused about where to start, or what that phrase even means.

Here’s the thing: leveraging AI successfully is very, very hard. A 2018 study by the MIT Sloan Management Review found that only 18% of organizations have extensively adopted AI within their offerings and processes — a number that seems high to me, based on my experience in the field. I’ve spent the last few years helping Fortune 500 companies across every industry identify use cases for AI and operationalize their AI efforts. I’ve learned quite a few things along the way, and I’ve seen my fair share of successes and failures.

I am one of the 81% of people who think that this technology is going to completely change the way we work and live. And so I think everybody should be thinking about and playing around with AI — not just tech companies with data scientists to spare. Here are the top five things that anybody who wants to work with artificial intelligence (machine learning, deep learning, neural networks, GANs, and so on) needs to know. Use these ideas to go out and build great things, or to push back on the things that you don’t think should be built.

1. Start with the outcome, not the technology.

I’ve heard the same story time and time again: somebody in upper management comes to their team and says, “I’ve heard this AI thing is pretty cool — let’s start using it.” Then, the team scrambles to put together some kind of AI action plan that will satisfy this important person by injecting AI into any and every business process possible. In my experience, this approach will almost always result in failure to deliver any meaningful outcomes.

With shiny new technology, it’s easy to get caught up in the hype cycle and jump on the bandwagon without thinking critically about where that bandwagon is going. But using technology for technology’s sake is never a good idea. (For evidence and a good laugh, take a look at one of my favorite Twitter accounts, which shares examples of unnecessary Internet-of-Things-connected devices making life more difficult.)

An AI project without a clear goal is going to end up a lot like this robotic head — completely useless and a strain on the other systems with which it interacts.

Just as with any project, you need to start your AI journey by identifying the outcomes you hope to create. Machine learning may or may not end up being the best tool to help you achieve those results — but there’s no way for you to evaluate what that best tool is if you’re not clear on your goals. This may sound like obvious advice, but it also may be exactly what your boss needs to hear after they’ve watched a really cool TED talk about neural networks.

2. Your project is only as feasible as your data is accessible.

The technology that we consider “AI” today, at its core, uses data to recognize patterns. Machine and deep learning models are only as good as the data you use to train them to do this pattern recognition. As you think about the outcomes you want to achieve with your AI project, you need to think in parallel about the relevant data. For example, if you want to build an AI application that automatically routes customer support tickets to the correct department to decrease response time, you will need a dataset of customer support tickets, each mapped to the relevant department.
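To make this concrete, here is a rough sketch of how a ticket-routing model of this kind could be prototyped as a simple text classifier. The tickets, department labels, and library choice (scikit-learn) are all illustrative assumptions — a real project would need thousands of expert-labeled tickets, not four.

```python
# Hypothetical prototype: route support tickets to departments with a
# simple text classifier. The tickets and labels below are invented
# placeholders standing in for a real labeled dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = [
    "I was charged twice for my subscription",
    "My invoice shows the wrong amount",
    "The app crashes when I open settings",
    "I cannot log in after the latest update",
]
departments = ["billing", "billing", "technical", "technical"]

# Turn ticket text into word-frequency features, then fit a classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, departments)

# Predict a department for a new, unseen ticket.
print(model.predict(["My invoice has an incorrect amount"])[0])
```

Notice that the model can only learn the departments that appear in the labeled data — which is exactly why the quality and accessibility of that data determines the feasibility of the project.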

But here’s the million-dollar question: where do you get good data to create an awesome machine learning model? There are three basic types of data: data you own, data that is publicly available, and data you can buy. Data you own is great because it’s free and it’s proprietary — meaning that whatever models you use it to train will not be easily replicated by a competitor, if they don’t have access to similar data. Publicly available data is also free, but it’s available to anybody, so it’s harder to build something that your competitors won’t be able to copy. Buying data is not ideal, but in a case where you have identified a high-value application, it might make sense. Make sure you take data costs into account as you’re developing your project plan. Become best friends with anybody in your organization who has “Analytics” in their job title; they’ll know where the good data lives and where there be dragons.

3. Your project needs broad stakeholder buy-in.

AI projects need a diverse team working together to succeed.

Any major investment of money and time needs an executive champion, but AI projects have a broader need for cross-organizational buy-in to be successful. Without participation from the people who are going to be using your tool — or impacted by its work — you will not be able to build and deploy an effective AI application.

Let me explain by returning to the example of the customer support routing system. You need training data to build the machine learning model that will automatically route those support tickets, and that data has to be labeled. In other words, somebody needs to go through a huge number of historical tickets and mark down which department each one should be sent to, based on its content. This labeling must be done by a team of subject matter experts — in this case, members of your support team who handle these tickets every day. Without their expertise, even the best team of data scientists can’t build a model that will do what you want.

This is, however, where things start to get a bit sticky, and your AI project turns into an exercise in change management as much as it is in technology implementation. The customer support team may be concerned about the impact this new application is going to have on their workflow — or even worse, they may think that you’re building a system that will ultimately be used to replace them entirely. They may be reluctant to help you build such a system, and they certainly will be reluctant to use and trust it when it is deployed into production.

The best way to get around this issue is to build AI systems that make those subject matter experts’ work easier, not redundant. Think critically about how your application is going to impact the lives of the people upon whom its construction fundamentally relies. Get (and use!) design input from the humans who will be impacted by this technology as a means to build trust and buy-in.

4. Identify potential risks at the start of your project.

I once heard someone propose that every technology company should have a “Chief Skepticism Officer,” whose sole job would be to poke holes in potential projects to ensure that nothing truly disastrous is released into the world. In addition to being the most awesome job title of all time, this role is actually essential for AI projects. I would argue that any company working with AI needs to cultivate a culture of skepticism that helps identify issues before they create terrible outcomes and become terrible headlines.

Steve Harvey is illustrating the kind of skepticism you need to examine your project for flaws.

Anybody working with AI should be not only aware of but also actively on the lookout for potential problems with their models and applications. The ability of AI to amplify societal bias on a massive scale, the susceptibility of AI to being “hacked” by adversarial attacks that prey on its weaknesses, and the privacy concerns associated with training models on individuals’ data are all valid risks that you must thoroughly explore during the development and deployment of an AI-powered system.

Write an AI risk management checklist for your organization. Work with your legal and compliance teams to make sure that you’re not overlooking any key regulations that are relevant to your customers and their data. Bring in validation teams to check your work. Continue to monitor your models after they’re deployed, to make sure they’re behaving as expected.

5. An AI project is never finished.

So you’ve built and deployed an AI-powered application — now what? Congratulations, but I have some bad news for you. No AI model survives its encounter with the real world; your model, which was trained at a point in time on a snapshot of real-world data, is going to begin to degrade in accuracy and performance the moment it is released into the wild.

This degradation is called “concept drift,” and it can be a major issue if not kept in check. Concept drift happens because the real world is changing all of the time; customer behavior changes, markets shift, competitors rise and fall. To keep up with all of this change, you need to continually update your AI models with new training data that reflects the world as it is. There’s no true end to an AI project, if you’re doing it well — just a continuous lifecycle of feedback and retraining.
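One common way to keep concept drift in check is to track the model’s accuracy on a recent window of labeled production data and compare it against the accuracy it had at deployment time. The sketch below illustrates that idea; the class name, thresholds, and window size are invented for illustration, not a standard API.

```python
# Hypothetical drift check: flag the model for retraining when its
# accuracy on recent production data falls meaningfully below the
# accuracy it had when it was deployed.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy, window_size=500, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        # Rolling record of recent outcomes: 1 = correct, 0 = wrong.
        self.outcomes = deque(maxlen=window_size)

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def drift_detected(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent data to judge yet
        recent_accuracy = sum(self.outcomes) / len(self.outcomes)
        return recent_accuracy < self.baseline - self.tolerance
```

In practice, a detected drop would trigger the feedback-and-retraining loop described above: collect fresh labeled data that reflects the world as it is now, retrain, redeploy, and keep monitoring.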


Head of Product @CredoAI, focused on building tools that help organizations operationalize Responsible AI.