The four options for AI in the cloud

There’s an AI race on among the big cloud providers: Amazon Web Services, Google, Microsoft and IBM. Each of the major players is rapidly releasing new services.

These offerings are rapidly evolving, and they differ in their levels of convenience and openness. Let’s dig in and take a look at the four different levels at which you can engage with a cloud provider for AI applications.

Compute and storage only

Services like AWS’ EC2 and Google Compute Engine provide elastic, on-demand compute resources. They have pricing bands that cater for both one-time computations and permanently running applications. In the machine learning world, this is particularly handy: training models is processor-intensive but infrequent, whereas scoring models in production requires fewer resources but must be always on. Every cloud provider now offers the ability to use GPUs in the cloud, which is essential for speeding up deep learning.

Whether you access the compute resource directly as virtual machines, or use a container hosting service, the architecture and code are completely up to you.
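As a concrete illustration, here is a minimal sketch of launching a GPU-backed EC2 instance for a one-off training run with boto3, assuming AWS credentials are already configured; the AMI ID and key pair name below are hypothetical placeholders, not real values.

    # Sketch: launch a GPU instance for a one-off training job, then
    # terminate it when training is done so you stop paying for it.
    # The AMI ID and key pair name are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder: e.g. a deep learning AMI
        InstanceType="p3.2xlarge",         # GPU-backed instance type
        KeyName="my-training-key",         # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched training instance:", instance_id)

    # ... run the training workload on the instance, then shut it down:
    ec2.terminate_instances(InstanceIds=[instance_id])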

Hosted data platform

The algorithmic parts of an AI application aren’t the whole story: the movement and processing of data form a large and important component. Instead of worrying about operating your own clusters, you can let cloud-hosted data platforms such as Amazon EMR or Google Cloud Dataproc take the load. Though these services protect you from lock-in by using industry-standard APIs, your system becomes less portable once you’ve committed to a particular vendor’s platform.
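For example, here is a minimal sketch of submitting a Spark job as a step to an already-running EMR cluster with boto3, assuming AWS credentials are configured; the cluster ID and the S3 path to the Spark script are placeholders.

    # Sketch: submit a Spark job to an existing EMR cluster.
    # The cluster ID and script path are placeholders.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.add_job_flow_steps(
        JobFlowId="j-PLACEHOLDER",             # placeholder EMR cluster ID
        Steps=[{
            "Name": "feature-extraction",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",   # standard EMR runner jar
                "Args": [
                    "spark-submit",
                    "s3://my-bucket/jobs/extract_features.py",  # placeholder script
                ],
            },
        }],
    )
    print("Submitted step:", response["StepIds"][0])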

Machine learning services

With machine learning services, you upload the data and use the provider’s APIs to train and operate your models. These services are continuously improved and optimized without action on your part, and provide favorable pricing options versus the cost of creating and running the services yourself. You are, of course, limited to the types of models provided. As the cloud providers make revenue from scoring the models, you generally have no option to export the trained models and take them elsewhere. (I believe IBM allows exporting of models in PMML.)
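As a rough sketch of how this looks in practice, here is how you might score a single record against a model trained with Amazon Machine Learning using boto3, assuming the model has already been trained and a real-time endpoint created for it; the model ID and the record’s feature names are placeholders.

    # Sketch: score a record against an Amazon Machine Learning model.
    # Assumes a trained model with a real-time endpoint already exists;
    # the model ID and feature values are placeholders.
    import boto3

    ml = boto3.client("machinelearning", region_name="us-east-1")

    model_id = "ml-PLACEHOLDER"
    endpoint = ml.get_ml_model(MLModelId=model_id)["EndpointInfo"]["EndpointUrl"]

    prediction = ml.predict(
        MLModelId=model_id,
        Record={"age": "42", "plan": "premium"},   # placeholder feature values
        PredictEndpoint=endpoint,
    )
    print(prediction["Prediction"])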

The only vendor to offer a deep learning variant of its machine learning services is Google, whose Cloud Machine Learning Engine runs TensorFlow models. In this respect, it’s closer to the hosted data platform services, and more open.
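A minimal sketch of submitting a TensorFlow training job to Cloud Machine Learning Engine with the Google API client library follows, assuming application default credentials are set up; the project, bucket, and trainer package names are placeholders.

    # Sketch: submit a TensorFlow training job to Cloud ML Engine.
    # The project, bucket, and trainer package names are placeholders.
    from googleapiclient import discovery

    project = "projects/my-project"          # placeholder project
    ml = discovery.build("ml", "v1")

    job_spec = {
        "jobId": "train_model_001",
        "trainingInput": {
            "region": "us-central1",
            "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],  # placeholder package
            "pythonModule": "trainer.task",
            "jobDir": "gs://my-bucket/output",
        },
    }

    request = ml.projects().jobs().create(parent=project, body=job_spec)
    response = request.execute()
    print("Job state:", response.get("state"))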

Cognitive API services

Training deep learning models is hard: many advanced models are the subject of journal publications, and tuning the networks requires a lot of data and deep expertise. So cloud providers offer pre-trained models as APIs for functions such as speech recognition, image labeling, or language translation.

I’ve labeled these services as “cognitive” because they tend to be based around the recognition capabilities of deep learning. Examples include Amazon’s Rekognition, Google’s Cloud Vision API, Microsoft’s Bing Speech API, and IBM’s Watson Language Translator.
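Calling these services is typically a single API request. For instance, here is a minimal sketch of labeling an image with Amazon Rekognition via boto3, assuming AWS credentials are configured; the bucket and object names are placeholders.

    # Sketch: label an image with Amazon Rekognition, one of the
    # pre-trained "cognitive" APIs. Bucket and object names are placeholders.
    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": "my-bucket", "Name": "photos/dog.jpg"}},
        MaxLabels=5,
    )
    for label in response["Labels"]:
        print(label["Name"], label["Confidence"])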

These cognitive services are the least stable and fastest-evolving of all the cloud AI offerings. You can see that some of the providers are still deciding whether to bring brands like Bing or Watson into play, and how those brands intersect with their regular cloud platforms. I recommend extensive research before using them as part of production services.

Trade-offs

As you assemble your application, you’ll inevitably make trade-offs between the amount of work you want to do yourself, and how much you’re willing to trust the cloud provider. Generally speaking, the more sophisticated a service you’re using, the harder it is to replace it with another one.

Figure: The trade-offs of cloud AI services.

One thing all AI applications have in common is their thirst for data, and where you choose to store your data will probably be the largest factor in your choice of cloud provider. The more advanced machine learning services naturally assume you use the cloud provider’s own data stores. Unless you have a greenfield opportunity, you have probably already elected to use one or more of the cloud providers anyway.

The story for AI in the cloud is a strong one: providers are now starting to compete on the whole data science experience, providing fast querying of big data warehouses, and analyst-facing features such as notebooks and visualization tools.


If you liked this and want to hear more, please join me on my Tech in Five Minutes newsletter. It’s an ongoing series of technology explanations in clear language without the hype. Look forward to seeing you there.