
Understanding How To Implement an AI Strategy

It’s not a bridge to nowhere, you just can’t see the other side. Total metaphor FTW.

There’s a growing gulf between those who understand artificial intelligence and those who don’t. I think I’ve found a way to bridge the gap, and I hope to start doing that in this post.

Don’t get me wrong. It’s not like the science of AI has drifted beyond the intellectual grasp of mere mortals — although it is sold that way more often than not. What I’m talking about here is actually the opposite.

AI and its various flavors, including everything from Deep Learning to Alexa, have filtered into the mainstream rather aggressively over the last five years. As a result, the number of organizations that want to implement an AI strategy has exploded. This is the demand side for AI. Meanwhile, the number of people who “do” AI is growing almost as rapidly, their level of real-world experience notwithstanding. This is the supply side.

However, while both the supply and demand sides for AI have grown rapidly, the number of people who understand how to implement an AI strategy has grown much more slowly. This is where the gulf lies: between supply and demand.

In my last post, I talked about bad AI and why it’s so prevalent in business. That prevalence comes down to this gulf, and more specifically to the lack of technologists and tools that can bridge the gap between the science of AI and its successful application.

In other words, we can’t wait to buy and sell AI, but we rarely know how to implement it.

I get this. I totally do. I spent eight years at Automated Insights, where we used a flavor of AI called Natural Language Generation to teach computers how to use words to summarize data. As head of this new science, my number one problem was always the implementation process.

In other words:

● How do we explain what we do?

● How do we communicate how we’re going to do it?

● How do we agree upon what the goals should be?

● How do we make sure that the result is actually what the customer wants?

These four questions are at the root of all software implementations, but right now they’re really a quagmire for AI.

Let me explain a little bit about how our NLG technology worked so you can get a sense of the depth of the problem, because if you’re trying to figure out an AI strategy, this is probably your problem too.

A customer comes to us with millions of rows of data, and each of those rows needs to be explained with words. This means millions of rows of data need to turn into millions of narratives, and each of those narratives needs to be useful in a revenue-generating way.
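To make that concrete, here’s a toy sketch of the shape of the problem. This isn’t our actual technology, and every field name and number in it is invented, but the idea is the same: structured data goes in, a readable sentence comes out.

```python
# A toy sketch, not the actual NLG system: one row of structured data becomes
# one narrative sentence. All field names and values here are invented.

def row_to_narrative(row: dict) -> str:
    """Turn one row of game data into a one-sentence summary."""
    margin = row["score"] - row["opponent_score"]
    if margin > 0:
        verb = "edged" if margin <= 3 else "beat"
    else:
        verb = "lost to"
    return (
        f"{row['team']} {verb} {row['opponent']} "
        f"{row['score']}-{row['opponent_score']} on {row['date']}."
    )

rows = [
    {"team": "Durham", "opponent": "Raleigh", "score": 5, "opponent_score": 3, "date": "May 4"},
    {"team": "Durham", "opponent": "Norfolk", "score": 2, "opponent_score": 7, "date": "May 5"},
]

# Millions of rows in, millions of narratives out.
narratives = [row_to_narrative(r) for r in rows]
```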

Got it? Awesome. Now here’s the problem.

There’s no good way to answer the questions I outlined above without actually doing the work, which takes a lot of time and money. Nor is there a decent way of showing progress, checking quality, or making adjustments, let alone doing smart, efficient things like iterative development.

As customers realized what was possible with NLG, they just wanted “robots writing articles.” As for the strategy — the why and how that would happen — that was usually an afterthought.

Same thing with AI. As customers realize what artificial intelligence and all its flavors can do, they just want the machines to be able to do those things, spending precious little time on the why and how.

That makes AI extremely risky and costly, and in too many cases leads to implementation failure.

So, like any good software development practitioner, I started my AI implementation strategy process with an outline, the simplest and quickest way to document the process from data in to decisions out — which is AI in a nutshell.

These documents were reserved for the people building the tech, not the people using the tech — i.e. our customers. For customers, we would need to distill these documents into simpler and simpler versions, each round of distillation more time-consuming than the last.

Eventually, we just gave up and started manually producing what we thought we would get out of the finished product, the equivalent of wire-framing a front end to describe a back end.

The results, as you would expect, were mixed. Furthermore, the documents quickly became inflexible. Too many words to describe too few facts. Too much information lost in translation back to the customer.

But there were common threads, and these threads kept me up at night. So I kept plugging away, creating documentation that sometimes only I would use and testing it in the real world with every new implementation we did.

The Matrix

Then, for our biggest project to date, the one that would eventually lead to our acquisition, I had an epiphany.

My first job out of college was as a software architect for a company that used multidimensional database technology. This sounds more complicated than it is, but the core of the technology was to abstract data in more than the three dimensions — row, column, table — we use to describe data today.

It’s like imaginary numbers. You can’t visualize it, but once you get beyond the need to visualize it, it all kind of falls into place.
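If it helps to see it rather than imagine it, here’s a minimal illustration of what “more than three dimensions” means in practice. The dimensions are invented and this isn’t the actual database technology, but the idea is that a value lives at the intersection of several dimensions instead of in a row of a flat table.

```python
# A minimal, invented illustration of multidimensional data: each value is
# addressed by a tuple of dimension members rather than a row in a flat table.

from collections import defaultdict

sales = defaultdict(float)

# Four dimensions: product, region, channel, month.
sales[("widgets", "east", "online", "2019-01")] += 1200.0
sales[("widgets", "east", "retail", "2019-01")] += 800.0
sales[("gadgets", "west", "online", "2019-02")] += 450.0

# Slicing the cube: total January online sales across all products and regions.
jan_online = sum(
    value for (product, region, channel, month), value in sales.items()
    if channel == "online" and month == "2019-01"
)
```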

That multi-dimensionality is the key to AI implementation strategy. Once I realized that we could indeed represent the final product visually, even if we had to do it by hand, I realized that what AI implementation is after isn’t the end result; it’s the meaning of the middle.

Then everything clicked into place. Traditional data software projects input new data, then run known calculations, then produce expected results, ideally with zero error. AI software projects usually input old data AND expected results, and then try to figure out what the hell the calculations were to get from one to the other, then apply those calculations to new data, then measure the error.

You know. Machine learning.
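Here’s that contrast in miniature, with an invented shipping-cost example and scikit-learn’s LinearRegression standing in for “the model.” The point isn’t the regression; it’s the direction of the arrows.

```python
# The contrast in miniature. The shipping-cost rule and all numbers are invented
# for illustration; LinearRegression stands in for "the model."

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Traditional software: apply a known, hand-written calculation to new data.
def shipping_cost(weight_kg: float) -> float:
    return 5.0 + 2.0 * weight_kg

# Machine learning: start from old data AND the expected results, let the model
# recover the calculation, then apply it to new data and measure the error.
old_weights = np.array([[1.0], [2.0], [4.0], [8.0]])    # historical inputs
old_costs = np.array([7.0, 9.0, 13.0, 21.0])            # historical answers

model = LinearRegression().fit(old_weights, old_costs)  # figure out the rule

new_weights = np.array([[3.0], [5.0]])
predicted = model.predict(new_weights)                  # apply it to new data
error = mean_absolute_error([11.0, 15.0], predicted)    # measure the error
```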

So I started experimenting with abstracting the implementation process within a matrix, removing the start-here-then-do-this linearity of turning data into decisions while keeping the overall flow of starting with data and ending with decisions.

This matrix is anchored by its axes. Each axis ties into the others, so the data has a direct relationship to the algorithms, which have a direct relationship to the outcomes, and so on, resulting in… I hate to call it a cube because it’s multidimensional, but think of it as a cube of information.

The matrix sets us up to work: it explains what we’re going to do, how we’re going to do it, and what we expect to see at the end.
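I can’t reproduce the real matrix here, but here’s a hypothetical sketch of the idea as a data structure rather than a linear document: entries on the data, algorithm, and outcome axes point at one another, so changing a requirement means rewriting an axis entry, not re-translating a whole spec back into code. The axis names and fields below are invented for illustration.

```python
# A hypothetical sketch of the idea, not the real matrix format: every axis
# entry links to related entries on the other axes, so a change to one entry
# is visible across the whole structure.

from dataclasses import dataclass, field

@dataclass
class AxisEntry:
    axis: str          # e.g. "data", "algorithm", "outcome"
    name: str
    description: str
    linked: list = field(default_factory=list)  # entries on other axes

def link(a: AxisEntry, b: AxisEntry) -> None:
    a.linked.append(b)
    b.linked.append(a)

game_logs = AxisEntry("data", "game_logs", "one row per game, per team")
recap_model = AxisEntry("algorithm", "recap_generator", "turns a game row into a recap")
fan_recap = AxisEntry("outcome", "fan_recap", "a readable recap for every game")

link(game_logs, recap_model)
link(recap_model, fan_recap)
```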

Once I got through a few of these, people on all sides of our AI projects started agreeing. Customers started agreeing on the goals: “Yes, this is what we want.” Developers started agreeing on the methods: “Yes, we can do this.” Changes became about rewriting an axis, not adding or removing a requirement and translating that back to code and algorithms. Everyone got smarter and more collaborative about the implementation process.

Documents become tools when they stop being linear descriptions of a process from beginning to end. The matrix does just that, evolving AI implementation strategy by taking the focus off the end result and providing a map to determine the best way to get there.

When we have the tools and can train the technicians to use them, we greatly reduce the risk of failure in AI implementations.