One of the hardest decisions when building a machine learning model is choosing which algorithm to use. The algorithm should be both accurate and well optimized. What if a tool could recommend an algorithm based on the data you upload? You're in luck.
AutoAI in Watson Studio
AutoAI in Watson Studio is a graphical tool that automatically analyzes your data and generates candidate model pipelines customized for your predictive modeling problem. These model pipelines are created over time as AutoAI algorithms learn more about your dataset and discover data transformations, estimator algorithms, and parameter settings that work best for your problem setting. Results are displayed on a leaderboard, showing the automatically generated model pipelines ranked according to your problem optimization objective.
The AutoAI process
Using AutoAI, you can build and deploy a machine learning model with sophisticated training features and no coding. The tool does most of the work for you.
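To give a concrete sense of what AutoAI automates, here is a minimal hand-written sketch of the same idea using scikit-learn: try several candidate estimators and parameter settings over a preprocessing pipeline, score each by cross-validation, and rank the results. This is an illustrative analogue, not the actual AutoAI implementation.

```python
# Sketch of the search AutoAI automates: candidate estimators and
# parameter settings ranked by cross-validated score (a "leaderboard").
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# A pipeline whose final "clf" step is itself a search dimension.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# Candidate estimator algorithms and parameter settings to explore.
param_grid = [
    {"clf": [LogisticRegression(max_iter=1000)], "clf__C": [0.1, 1.0, 10.0]},
    {"clf": [DecisionTreeClassifier(random_state=0)], "clf__max_depth": [2, 3, 5]},
]

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)

# Best candidate and its cross-validated accuracy.
print(search.best_params_)
print(round(search.best_score_, 3))
```

AutoAI performs a far richer version of this search (including feature engineering and hyperparameter optimization), but the ranked-candidates idea is the same.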
Tutorial: Building, deploying, testing, monitoring, and retraining a model
I’m excited to present a tutorial with step-by-step instructions that walk you through the end-to-end process of the following:
- Building a predictive machine learning model using AutoAI.
- Deploying it as an API to be used in applications.
- Testing the model.
- Monitoring the model using Watson OpenScale.
- Retraining the model with feedback data.
All of this happens in an integrated and unified self-service experience on IBM Cloud.
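For the deployment and testing steps above, a deployed model is exposed as a REST endpoint that accepts JSON. The sketch below builds a scoring request body modeled on Watson Machine Learning's v4 `input_data` format; the field names, endpoint URL, and token placeholders are assumptions to adapt to your own deployment.

```python
# Build a scoring payload for a deployed classifier. The payload schema
# follows the WML-style "input_data" convention; verify the exact fields
# against your deployment's details in Watson Studio.
import json

payload = {
    "input_data": [{
        # Assumed column names for the four Iris measurements.
        "fields": ["sepal_length", "sepal_width", "petal_length", "petal_width"],
        "values": [[5.1, 3.5, 1.4, 0.2]],  # one flower to classify
    }]
}

body = json.dumps(payload)
print(body)

# To score against a real deployment (hypothetical URL and token):
# import requests
# resp = requests.post(
#     "https://<region>.ml.cloud.ibm.com/ml/v4/deployments/<id>/predictions",
#     headers={"Authorization": "Bearer <token>",
#              "Content-Type": "application/json"},
#     data=body,
# )
# print(resp.json())
```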
In this tutorial, the Iris dataset is used to create a machine learning model that classifies species of flowers.
Following the steps in the tutorial, you will create a Watson Studio project, associate a Watson Machine Learning (WML) service with the project, and add an AutoAI experiment. The AutoAI experiment then uses the Iris dataset to recommend multiple pipelines. The pipeline with the best accuracy is saved as a machine learning model and deployed.
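As a rough analogue of "save the best pipeline as a model," the sketch below trains one candidate pipeline on the Iris dataset, measures holdout accuracy, and persists it with joblib. The file path and use of joblib are illustrative assumptions; AutoAI stores the model in your WML repository instead.

```python
# Train a single Iris pipeline, check holdout accuracy, and persist it —
# a hand-built stand-in for saving the top AutoAI pipeline as a model.
import os
import tempfile

import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# Persist and reload the winning pipeline (temp path for illustration).
path = os.path.join(tempfile.mkdtemp(), "iris_pipeline.joblib")
joblib.dump(model, path)
reloaded = joblib.load(path)

print(round(accuracy, 3))
```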
Once AutoAI completes the experiment, the output looks similar to this:
The next step is to monitor your deployed machine learning model with Watson OpenScale.
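The final step of the tutorial, retraining with feedback data, can be sketched in a few lines: append newly labeled records collected after deployment to the original training set and refit. The feedback rows below are made-up examples for illustration, not actual OpenScale output.

```python
# Hedged sketch of retraining: merge feedback records (production inputs
# that were later labeled by a reviewer) into the training data and refit.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Hypothetical feedback: two flowers scored in production, then labeled.
feedback_X = np.array([[5.0, 3.4, 1.5, 0.2],   # labeled setosa (0)
                       [6.7, 3.0, 5.2, 2.3]])  # labeled virginica (2)
feedback_y = np.array([0, 2])

# Retrain on the combined dataset.
X_new = np.vstack([X, feedback_X])
y_new = np.concatenate([y, feedback_y])
model = LogisticRegression(max_iter=1000).fit(X_new, y_new)

print(model.predict(feedback_X))
```

In the tutorial, OpenScale manages this loop for you: it logs payloads, collects feedback, and triggers retraining when model quality drifts.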