Getting started with Machine Learning on GCP — Part 3: Making predictions

Simon Lind
Predictly on Tech

--

This is the third and last part of my article series on how to get started with Machine Learning on GCP. You can find the first article here.

So far, we have taken data from a source system and uploaded it into Google BigQuery. In the second article, we used the interactive notebook environment on GCP to explore our data and identify some issues. Finally, we used Google Dataflow to address those issues in a streaming pipeline and load the cleaned, quality-controlled data into a datastore.

In this last part, we will focus on building a model and demonstrating how the BigQuery ML TRANSFORM clause lets you reuse feature engineering code for model training and for making predictions on new data.

Feature Engineering and Selection

Feature engineering is a process where a data scientist uses his or her domain knowledge to create new data from existing data. This new data can often be used to enhance the performance of machine learning models.

A different approach is embedded feature engineering, where this process is automated, either by the model itself during training or by some other pre-processing step. This approach allows one to get started quickly, which is our focus for this article.
Google BigQuery has an AutoML-style offering called BigQuery ML that supports TRANSFORM clauses; these transform the input data and create additional features for the model during training. The transformations are then saved in the model, so the input data does not have to contain the new features when making predictions on new data.

For feature selection we will also use an embedded method. We will train a tree model using XGBoost, which includes some embedded feature selection during training: it uses L1 and/or L2 regularization to penalize features that do not contribute to a better prediction. Regularization also helps reduce the model's variance, lowering the risk of overfitting.

The amount of regularization applied is controlled by the L1_REG and L2_REG parameters in the CREATE MODEL statement.

Using these two methods we can automate a lot of the process of modelling.

BigQuery ML

BigQuery ML is a service on GCP which allows you to create and use machine learning models using standard SQL. It automates many parts of the training process, such as hyperparameter tuning and in some cases even data pre-processing, saving tremendous amounts of time in the “getting started” phase.

BigQuery ML offers solutions for a range of problems such as time series predictions, regression, classification, clustering and even product recommendations. Whatever your case might be, chances are you can get started fairly quickly and easily with machine learning.
BigQuery is available in the console and ready to use straight away; just navigate to BigQuery in the console and we can build our first model.

To build a model, one simply uses the CREATE MODEL statement:
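
As a minimal sketch, the statement for our case could look something like the one below. The project, dataset and column names, including the pre-computed label column label_close_5min (the close price 5 minutes ahead), are placeholders and assumptions to be adapted to your own data.

CREATE OR REPLACE MODEL `my-project.btc.btc_price_model`
OPTIONS (
  -- Boosted tree (XGBoost) regression model
  model_type = 'BOOSTED_TREE_REGRESSOR',
  -- The column we want to predict: the close price 5 minutes into the future
  input_label_cols = ['label_close_5min'],
  -- Split the data sequentially on the timestamp column instead of randomly
  data_split_method = 'SEQ',
  data_split_col = 'datetime',
  -- Embedded feature selection through regularization
  l1_reg = 0.1,
  l2_reg = 0.1
) AS
SELECT
  datetime,
  open,
  high,
  low,
  close,
  volume,
  label_close_5min
FROM
  `my-project.btc.btc_processed_data`;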

The OPTIONS clause gives us the opportunity to tune some of the parameters of the model. For now, let's focus on DATA_SPLIT_COL, DATA_SPLIT_METHOD and INPUT_LABEL_COLS.

The split options tell the pre-processing part of the modelling which column to use when splitting the data into a training and a test dataset. The training dataset is used for the actual training of the model, and the test set is then used to evaluate the model.

Since our case has some “time” embedded in it, it is important not to “cheat” and let the model get a glimpse of the future. To avoid this we specify the split method, which in our case should be sequential. If it were random, which is otherwise often the preferred way of splitting data, the training data could contain datapoints that occurred after some of the data in the test set, giving the model an unfair advantage.

INPUT_LABEL_COLS specifies which column we want to predict. In this case we want to predict the close price 5 minutes into the future.

For more information on the options, refer to the official docs.

Now just hit “Run” and our model will be trained. Once finished, we must evaluate the performance of our model. This is done by using the EVALUATE clause.
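
In its simplest form, the evaluation only needs a reference to the model; the model name below follows the sketch above and is, again, an assumption.

-- Evaluate the model on the automatically reserved test portion of the data
SELECT
  *
FROM
  ML.EVALUATE(MODEL `my-project.btc.btc_price_model`);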

This query returns a set of performance metrics by testing the model on the testing fraction of the data we specified in the training query. You can however specify another set of data for evaluating the model. Read more here.
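
As a sketch, evaluating against a different dataset could look like the query below, where the table and the date filter are assumptions.

-- Evaluate the model against a custom dataset instead of the reserved split
SELECT
  *
FROM
  ML.EVALUATE(
    MODEL `my-project.btc.btc_price_model`,
    (
      SELECT *
      FROM `my-project.btc.btc_processed_data`
      WHERE datetime >= TIMESTAMP('2021-01-01')
    )
  );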

From here, you can add features and tune the options to see if the results of the model improve. The interpretation of these metrics will vary from case to case, and I will not cover that here.

Let's continue and add the TRANSFORM clause for some simple feature engineering when training our model. There are several functions available in the TRANSFORM clause; you can find them all here.
We are going to do a simple expansion of our feature set using polynomial expansion.
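
A sketch of how the training statement could look with a TRANSFORM clause added; as before, the model, table and column names are assumptions.

CREATE OR REPLACE MODEL `my-project.btc.btc_price_model_poly`
TRANSFORM (
  -- Expand the numeric inputs into all polynomial combinations of degree 2
  ML.POLYNOMIAL_EXPAND(STRUCT(open, high, low, close, volume), 2) AS poly_features,
  -- Pass the split column and the label through unchanged
  datetime,
  label_close_5min
)
OPTIONS (
  model_type = 'BOOSTED_TREE_REGRESSOR',
  input_label_cols = ['label_close_5min'],
  data_split_method = 'SEQ',
  data_split_col = 'datetime',
  l1_reg = 0.1,
  l2_reg = 0.1
) AS
SELECT
  datetime,
  open,
  high,
  low,
  close,
  volume,
  label_close_5min
FROM
  `my-project.btc.btc_processed_data`;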

Polynomial expansion forms every polynomial combination of the input features up to the chosen degree and outputs the full set of those combinations as new features. This transformation is saved in the new model, saving us the trouble of having to rewrite it when predicting on new data. We can now use the EVALUATE clause again to see if our model's performance is any better relative to the last model.

To make predictions on new data, simply use the PREDICT clause; all you have to make sure of is that the query containing the new data has the same fields as the input used for model training.

A query to run a prediction on the last datapoint in our dataset.
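
A sketch of such a query, again with assumed project, dataset and column names:

-- Predict the close price 5 minutes ahead for the most recent datapoint
SELECT
  *
FROM
  ML.PREDICT(
    MODEL `my-project.btc.btc_price_model_poly`,
    (
      SELECT
        datetime,
        open,
        high,
        low,
        close,
        volume
      FROM `my-project.btc.btc_processed_data`
      ORDER BY datetime DESC
      LIMIT 1
    )
  );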

And voilà, we have our first prediction of the “future”.

The predicted price 5 minutes after the input datetime, based on the input data.

Since our model now contains all the necessary data transformations, the input dataset in the SELECT clause only needs to contain the same fields as the input data for the model training, making it very easy to make new predictions. As we populate the btc_processed_data table with new data in the future, it will be a breeze to make predictions on the future price based on that data.

In a future article, I will show how you can handle change data capture to make sure your datasets are always up to date, allowing you to make real predictions of the future.

Summary and Disclaimer

In this series of articles, the focus has been on how you can use the services and tools offered on GCP to get started quickly with machine learning. We have covered data ingestion, data transformation and exploration, feature engineering and selection, and lastly modelling and prediction. The aim was never to create a good model for price prediction. XGBoost models are not very good at extrapolating, which gives them a hard time predicting values outside the range of the training set.

If we were to use the models we have created in this series, we would most likely have a bad time. Please refrain from doing so.

I hope that I have shown the value of the offerings on GCP when it comes to working with data and machine learning, and perhaps made it easier for you to get started with ML on GCP.
