AI Development and Integration Using Google Cloud

Maxell Milay
Google Cloud - Community
Jul 25, 2024

Developing with AI (Artificial Intelligence) can be tedious and demanding, particularly because of how it strains the machine used for development and the effort required to set up the development pipeline. Building and integrating AI into software requires some setup, especially when creating machine learning models from scratch. It usually means making sure the GPU (Graphics Processing Unit) is capable of training models at a reasonable rate; otherwise, slow training hinders the iterative process of ML (Machine Learning) development.

With the rise of Google Cloud in software development, it can also be used to deliver AI solutions. Google Cloud is one of many cloud providers, which essentially rent out computing power. At its core, what a cloud provider does is rent out servers (which are basically computers) so that customers don't have to go through the hassle of setting up dedicated hardware, though of course it is much more complicated than that. Google Cloud provides many layers of abstraction on top of this infrastructure, each with its own purpose. These correspond to the different APIs that can be consumed as services from Google Cloud, and recently, Google has provided a set of services tailored toward AI development.

API Abstraction Layers in Google Cloud

Google Cloud's suite of AI development services is Vertex AI, a set of services that each address a specific aspect of creating AI solutions. Google Cloud also offers AI-related services outside of Vertex AI. In summary, the major services include:

  1. Pre-trained APIs
  2. BigQuery ML
  3. AutoML
  4. Custom Training

Both AutoML and Custom Training are within Vertex AI, but the rest generally are not. Pre-trained APIs are ready-to-use services that only need to be called through an API request; some of them, specifically those under generative AI, fall under the Vertex AI umbrella. BigQuery is primarily a data warehouse used to store data in preparation for ML development, but it has evolved over the years to support model development itself.

Pre-trained APIs

As the name suggests, pre-trained APIs are ML models that Google Cloud has already built and trained. In short, you can integrate them directly into your application without touching anything related to model development. These APIs are incredibly useful for rapid prototyping, where time is critical in creating a proof of concept or a Minimum Viable Product (MVP). With just an API call, software developers can integrate a range of AI capabilities into their software. The pre-trained APIs used in AI development are primarily categorized as follows: Speech, Text, and Language APIs; Image and Video APIs; Document and Data APIs; Conversational AI APIs; and Generative AI APIs.

Speech, Text, and Language APIs

These APIs are used to derive insights from unstructured text using Google machine learning. Their capabilities include Entity Analysis, Sentiment Analysis, Language Syntax, Category Analysis, and more. Entity Analysis identifies and classifies the subjects in the text and can generate summaries based on key entities. Sentiment Analysis identifies the emotions expressed in the text (positive, negative, neutral) and is particularly useful for assessing customer feedback. Language Syntax analyzes syntax and extracts linguistic information. Category Analysis assists in categorizing the texts within documents.
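
As an illustration, a minimal sketch of running sentiment analysis through the Natural Language API with its Python client library might look like the following (the input text is just a placeholder):

# Sentiment analysis with the Cloud Natural Language API (Python client library).
# Requires the google-cloud-language package and application default credentials.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="The delivery was fast and the support team was very helpful.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Score ranges from -1.0 (negative) to 1.0 (positive); magnitude reflects intensity.
response = client.analyze_sentiment(request={"document": document})
sentiment = response.document_sentiment
print(f"Score: {sentiment.score:.2f}, magnitude: {sentiment.magnitude:.2f}")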

Image and Video APIs

The services under this category primarily include, but are not limited to, the Vision API and the Video Intelligence API. The Vision API is best for quick and easy integration of basic vision features such as image labeling, face and landmark detection, Optical Character Recognition (OCR), and others. The Video Intelligence API is best for analyzing video content, content moderation, recommendation, and other use cases built on features such as object detection and tracking, scene understanding, activity recognition, face detection and analysis, and text detection and recognition.
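
For example, a minimal sketch of image labeling with the Vision API's Python client library (the Cloud Storage path is a hypothetical placeholder) might look like this:

# Image labeling with the Cloud Vision API (Python client library).
# Requires the google-cloud-vision package; the GCS path is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

image = vision.Image()
image.source.image_uri = "gs://my-bucket/photo.jpg"  # hypothetical bucket/object

# Each label comes back with a description and a confidence score.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 3))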

Document and Data APIs

These are services that analyze and manipulate documents using AI. They include, but are not limited to, the Document AI API and the Document Warehouse API. The Document AI API is mainly used for creating document processors that help automate tedious tasks, improve data extraction, and gain deeper insights from unstructured or structured document information. The Document Warehouse API is designed to streamline the handling, processing, and analysis of large-scale document collections, enhancing productivity and enabling better decision-making through advanced data extraction and management capabilities.
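
To illustrate, a rough sketch of sending a PDF to an existing Document AI processor with the Python client library might look like this; the project, location, processor ID, and file name are all placeholders, and a processor has to be created in the console beforehand:

# Processing a PDF with a Document AI processor (Python client library).
# Requires the google-cloud-documentai package; IDs below are placeholders.
from google.api_core.client_options import ClientOptions
from google.cloud import documentai

project_id, location, processor_id = "my-project", "us", "my-processor-id"

client = documentai.DocumentProcessorServiceClient(
    client_options=ClientOptions(api_endpoint=f"{location}-documentai.googleapis.com")
)
name = client.processor_path(project_id, location, processor_id)

with open("invoice.pdf", "rb") as f:
    raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

result = client.process_document(
    request=documentai.ProcessRequest(name=name, raw_document=raw_document)
)
print(result.document.text[:500])  # first part of the extracted text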

Conversational AI APIs

A major service within the conversational AI category is the Dialogflow API, a versatile tool for creating intelligent conversational interfaces that can enhance user engagement and automate customer service interactions. It is primarily used for building conversational applications such as chatbots and voice-powered assistants, which can then be deployed on various platforms, including websites, mobile apps, and messaging platforms like Facebook Messenger, Slack, and more.
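
As a quick illustration, a minimal sketch of sending a user message to a Dialogflow ES agent and reading the reply with the Python client library might look like this (the project ID, session ID, and message are placeholders):

# Detecting an intent with Dialogflow ES (Python client library).
# Requires the google-cloud-dialogflow package; IDs below are placeholders.
from google.cloud import dialogflow

project_id, session_id = "my-project", "my-session-123"

session_client = dialogflow.SessionsClient()
session = session_client.session_path(project_id, session_id)

text_input = dialogflow.TextInput(text="I want to book a flight", language_code="en-US")
query_input = dialogflow.QueryInput(text=text_input)

response = session_client.detect_intent(
    request={"session": session, "query_input": query_input}
)
print("Detected intent:", response.query_result.intent.display_name)
print("Agent reply:", response.query_result.fulfillment_text)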

Generative AI APIs

The set of pre-trained APIs under generative AI is the newest and most diverse. As new generative AI models are released, more of these APIs are deployed to cater to specific features. They primarily include the Gemini multimodal API, the Embeddings API, the Imagen API, the Codey APIs, and others.

The Gemini multimodal API processes and generates data across multiple modalities, including text, images, and video.
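
A minimal sketch of calling Gemini through the Vertex AI SDK might look like the following; the project ID, region, model version, and Cloud Storage path are placeholders:

# Calling the Gemini multimodal API through the Vertex AI SDK for Python.
# Requires the google-cloud-aiplatform package; IDs and paths are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content([
    Part.from_uri("gs://my-bucket/diagram.png", mime_type="image/png"),  # image input
    "Describe what this diagram shows in one sentence.",                 # text input
])
print(response.text)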

The Multimodal Embeddings API generates numerical vectors (embeddings) from a combination of image, text, and video inputs. These embeddings capture the semantic meaning and context of the data, enabling various advanced applications such as image and video classification, semantic search, and content moderation.
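
To give a feel for the embeddings workflow, here is a minimal sketch using the text-only embeddings model through the Vertex AI SDK; the multimodal variant follows a similar pattern, and the project ID, model version, and example sentences are assumptions for illustration:

# Generating text embeddings with the Vertex AI SDK for Python.
# Project ID and model version are placeholders.
import vertexai
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="my-project", location="us-central1")

model = TextEmbeddingModel.from_pretrained("textembedding-gecko@003")
embeddings = model.get_embeddings([
    "A yellow taxi driving through Manhattan at night",
    "Cab fares in New York City",
])
for emb in embeddings:
    # Each embedding is a dense vector capturing the semantic meaning of the text.
    print(len(emb.values), emb.values[:3])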

Imagen on Vertex AI equips developers with Google’s cutting-edge image generation AI, enabling the creation of high-quality visual assets from text prompts in seconds.

The Codey APIs offer a suite of tools for working with code. These include the Code Generation API, which creates code from natural language descriptions using the code-bison model; the Code Chat API, which powers chatbots for coding assistance and debugging via the codechat-bison model; and the Code Completion API, which provides auto-completion suggestions to enhance coding efficiency with the code-gecko model.
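
As a rough sketch rather than definitive usage, calling the code generation model through the Vertex AI SDK might look like the following; the model name, parameters, and prompt are assumptions for illustration:

# Generating code from a natural-language prompt with the code-bison model
# via the Vertex AI SDK for Python (model name and parameters are assumptions).
import vertexai
from vertexai.language_models import CodeGenerationModel

vertexai.init(project="my-project", location="us-central1")  # placeholders

model = CodeGenerationModel.from_pretrained("code-bison")
response = model.predict(
    prefix="Write a Python function that reverses a string.",
    max_output_tokens=256,
)
print(response.text)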

BigQuery ML

For the data engineers and scientists out there, BigQuery might be better known as a platform for data warehousing. A data warehouse, in simple terms, is like a giant database that collects and stores information from various parts of a company so it can be analyzed and used to generate reports.

However, Google extended its capabilities with BigQuery ML, which can be thought of as a set of SQL extensions to support machine learning. It allows users to invoke machine learning models on structured data using SQL, and it can also provide decision-making guidance through predictive analytics. With BigQuery ML, users don't need to export data out of BigQuery to create and train a model.

First, you need to prepare and upload your dataset to BigQuery. For this demo, we will be using the NYC Taxi and Limousine Commission Yellow Taxi Trip dataset. We can create and train a linear regression model using the query below.

CREATE OR REPLACE MODEL
  mydataset.model_linreg
OPTIONS(
  input_label_cols=['fare_amount'],
  model_type='linear_reg'
) AS
SELECT
  fare_amount,
  pickup_longitude,
  pickup_latitude,
  dropoff_longitude,
  dropoff_latitude,
  passenger_count
FROM
  `nyc-tlc.yellow.trips`

BigQuery ML supports several model types, including but not limited to classification and regression models. After training, you can inspect the evaluation metrics using the query below.

SELECT
  *
FROM
  ML.EVALUATE(
    MODEL mydataset.model_linreg
  )

To generate predictions with the model you just created, you can use the query below.

SELECT
  *
FROM
  ML.PREDICT(
    MODEL mydataset.model_linreg,
    (SELECT
      fare_amount,
      pickup_longitude,
      pickup_latitude,
      dropoff_longitude,
      dropoff_latitude,
      passenger_count
    FROM
      `nyc-tlc.yellow.trips`
    )
  )
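
While the BigQuery SQL workspace in the console is the most direct way to run these statements, the same queries can also be submitted from application code. A minimal sketch with the BigQuery Python client library, reusing the dataset and model names from the examples above and a placeholder project ID, might look like this:

# Running a BigQuery ML prediction query with the BigQuery Python client library.
# Requires the google-cloud-bigquery package and application default credentials.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project ID

sql = """
SELECT predicted_fare_amount, passenger_count
FROM ML.PREDICT(
  MODEL mydataset.model_linreg,
  (SELECT fare_amount, pickup_longitude, pickup_latitude,
          dropoff_longitude, dropoff_latitude, passenger_count
   FROM `nyc-tlc.yellow.trips`
   LIMIT 10)
)
"""

# BigQuery ML prefixes the label column with "predicted_" in ML.PREDICT output.
for row in client.query(sql).result():
    print(row.predicted_fare_amount, row.passenger_count)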

AutoML

Google Cloud’s AutoML allows users with little to no machine learning experience to train models simply by preparing training data. Automated machine learning (AutoML) automates the end-to-end process of building machine learning models, including data preprocessing, feature engineering, model selection, and hyperparameter tuning. Its goal is to simplify model development for non-experts through a user-friendly interface, democratizing machine learning and making it accessible to a broader audience, including those with limited data science experience.

First, the user must collect and prepare data, then import it into a dataset to train a model.

Importing dataset and selecting training method

After setting up the dataset, you can adjust minor details in the model details section, such as the data split and encryption settings.

Setting up additional model training details

After that, you can now start training!
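
The same flow can also be driven from code. A minimal sketch with the Vertex AI SDK for Python for a tabular AutoML regression job, where the project ID, Cloud Storage source, display names, and column name are all placeholders, might look like this:

# Training a tabular AutoML regression model with the Vertex AI SDK for Python.
# Project ID, dataset source, and column names are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

dataset = aiplatform.TabularDataset.create(
    display_name="taxi-fares",
    gcs_source="gs://my-bucket/taxi_trips.csv",
)

job = aiplatform.AutoMLTabularTrainingJob(
    display_name="taxi-fare-automl",
    optimization_prediction_type="regression",
)

model = job.run(
    dataset=dataset,
    target_column="fare_amount",
    budget_milli_node_hours=1000,  # roughly one node hour of training budget
)
print(model.resource_name)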

Custom Training

Finally, you can perform custom training using Vertex AI. This approach gives the machine learning engineer complete freedom over the training configuration. However, the engineer has to build the model from scratch, and Google Cloud offers two major platforms for that: Vertex AI Workbench and Colab Enterprise. Vertex AI Workbench is a Jupyter notebook-based, managed environment that supports the data science workflow in a single development environment.

Vertex AI user-managed Workbench notebook

Google Colab Enterprise, on the other hand, is a managed notebook service within Google Cloud designed to provide secure, scalable, and collaborative environments for data science, machine learning, and AI workflows. It extends the capabilities of the popular Google Colab service with enterprise-grade security, compliance, and integration with other Google Cloud services. Recently, Gemini has been integrated into Colab, using the notebook as context to better assist users.

Google Colab Enterprise notebook
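
Beyond working interactively in a notebook, a custom training script can also be packaged and submitted as a Vertex AI training job. A minimal sketch with the Vertex AI SDK for Python, where the project ID, staging bucket, training script, container image, and machine configuration are all placeholders, might look like this:

# Submitting a custom training job with the Vertex AI SDK for Python.
# All names, the training script, and the container image below are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)

job = aiplatform.CustomTrainingJob(
    display_name="taxi-fare-custom-training",
    script_path="train.py",  # your own training script
    container_uri="us-docker.pkg.dev/vertex-ai/training/pytorch-gpu:latest",  # placeholder image
    requirements=["pandas", "scikit-learn"],
)

job.run(
    machine_type="n1-standard-8",
    accelerator_type="NVIDIA_TESLA_T4",
    accelerator_count=1,
    replica_count=1,
)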

Summary

In summary, there are four main ways to develop and integrate AI in software using Google Cloud. If you have little ML expertise or no intention of training a model, use the pre-trained APIs. If you are (or want to be) familiar with SQL and already have data in BigQuery, use BigQuery ML. If you want to build custom models with your own training data and minimal coding, use AutoML. If you want to build an ML model from scratch and have full control of the ML workflow, use Vertex AI Workbench or Colab Enterprise for custom training.

Summary of AI development and integration using Google Cloud

Moving Forward

ML model development is just one part of the whole ML workflow. If you want to use Google Cloud to fully deploy a model, you need to properly engineer your data before fitting the model to it. Google offers many services related to data preprocessing, including Feature Store, which centralizes features for ML use within organizations. After creating a custom model, you would then need a way to continuously integrate and retrain it. This can be done through Machine Learning Operations (MLOps), where ML training and operations pipelines are established. The whole ML workflow is an iterative process, and using Google Cloud can remove unnecessary blockers along the way. By automating parts of the pipeline, engineers and developers can focus on the more important and relevant parts of developing and integrating end-to-end AI solutions.