Machine learning architecture on Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure


The cloud platforms each offer a range of services that can be mixed and matched to satisfy a business case within its allocated budget. Here I will pick a generic example and discuss its architecture on each of the above-mentioned platforms.

Since the architectures are serverless, a set of small functions makes sure data keeps moving forward between the services.
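As a sketch, one of those glue functions might look like the following AWS Lambda-style handler. The event shape and the downstream action name are assumptions for illustration, not taken from any specific deployment; a real function would forward the payload to the next service (a queue or a prediction endpoint) rather than just return it.

```python
import json

# A minimal sketch of a serverless glue function, shaped like an AWS
# Lambda handler for an S3 "object created" event. The field names in
# the event and the "action" value are hypothetical placeholders.
def handler(event, context=None):
    record = event["Records"][0]["s3"]
    payload = {
        "bucket": record["bucket"]["name"],
        "key": record["object"]["key"],
        "action": "send-to-prediction-service",  # assumed next step
    }
    # In a real deployment this payload would be published to the next
    # service; here we simply return it.
    return json.dumps(payload)
```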


The business has an incoming stream of support tickets. Support agents receive minimal information from the customer, so they spend extra time trying to understand what the customer is asking for.

Before an agent…

Getting predictions on new data from a REST API, with the model hosted using TensorFlow Serving.

So, you have a beautiful model that works like a charm on your data. Now you want to put that model into production and get predictions on new data.

Let me introduce you to TensorFlow Serving, a system designed to serve trained models in production. By default it comes with seamless integration for TensorFlow models, but it can be extended to serve other kinds of models as well.
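To make the REST part concrete, here is a small sketch of building a prediction request against a running TensorFlow Serving instance. The host, port and model name (`my_model`) are placeholders; the `/v1/models/<name>:predict` path and the `{"instances": [...]}` body follow TensorFlow Serving's REST API.

```python
import json
from urllib import request

# Build a predict request for a model served by TensorFlow Serving.
# Host, port and model name are assumptions; 8501 is the default REST port.
def build_predict_request(instances, host="localhost", port=8501, model="my_model"):
    url = f"http://{host}:{port}/v1/models/{model}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

# Sending it requires a running server, so the call itself is left out:
# req = build_predict_request([[1.0, 2.0, 5.0]])
# with request.urlopen(req) as resp:
#     predictions = json.loads(resp.read())["predictions"]
```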

Using the cross-entropy method in the “FrozenLake” (non-slippery) environment to get started with reinforcement learning.

Reinforcement learning is generally defined as an agent learning to make optimal decisions over time within an environment.

At a high level, reinforcement learning methods can be classified, in an oversimplified manner, as follows:

1. Model-free or model-based:
a. Model-free: a brute-force approach where the agent acts first and thinks later.
b. Model-based: the agent builds a prediction of the environment from historical data and takes actions based on that prediction.

2. Value-based or policy-based:
a. Value-based: the agent estimates the discounted total reward it can get from a given state of the environment.
b. Policy-based: the agent directly learns a policy that maps states to actions.
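The heart of the cross-entropy method mentioned above is an elite-selection step: play a batch of episodes, keep only those whose total reward is above a percentile cutoff, and train the policy on the (state, action) pairs of those "elite" episodes. A sketch of that step follows; the episode format `(total_reward, [(state, action), ...])` is a hypothetical convention, not from any particular library.

```python
import numpy as np

# Keep the (state, action) pairs from episodes whose total reward is at
# or above the given percentile of the batch. These elites become the
# training data for the next policy update in the cross-entropy method.
def filter_elite(episodes, percentile=70):
    rewards = np.array([reward for reward, _ in episodes])
    cutoff = np.percentile(rewards, percentile)
    elite_states, elite_actions = [], []
    for reward, steps in episodes:
        if reward >= cutoff:
            for state, action in steps:
                elite_states.append(state)
                elite_actions.append(action)
    return elite_states, elite_actions, cutoff
```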

Using FIFA 18 game data to classify a football player as a CM, ST, CB, or GK.


Thanks to Aman Shrivastava for sharing the data.

We will use the provided dataset, which is already processed and ready for our purposes.

Use case: as a user, I have a player's statistics and I want to know whether the player should play as a Striker (ST), Central Midfielder (CM), Centre Back (CB), or Goalkeeper (GK).

The data I’ve used are the attribute scores of all players in the FIFA 18 game. The dimensions of the data are 14045 x 74, but I am using the data of the specialist players with the selected preferred position as…
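For intuition, a toy baseline for this use case can be sketched with a nearest-centroid rule: average the attribute vectors per position, then assign a new player to the closest centroid. The two attribute columns and the tiny sample below are hypothetical stand-ins; a real model would be trained on the full 14045 x 74 dataset.

```python
import numpy as np

# Mean attribute vector per position label.
def fit_centroids(X, y):
    return {pos: X[y == pos].mean(axis=0) for pos in np.unique(y)}

# Predict the position whose centroid is nearest (Euclidean distance).
def predict_position(centroids, player):
    return min(centroids, key=lambda pos: np.linalg.norm(player - centroids[pos]))
```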

Level: Beginner

Python understanding: Intermediate

Knowledge of data science: Intermediate

Objective: Develop an intuition for multi-dimensional datasets.

Goal: A trained model that performs image classification

IDE: To get started, I recommend using a Jupyter notebook on Google Colaboratory (Colab).

Regarded as the “hello world” of deep learning, this dataset exposes aspiring data scientists to the complexities that exist in the real world.

The intention is to share how I have learned to understand the world of multi-dimensional arrays.

Loading and examining the data

Let’s load the data
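In Colab, the MNIST digits ship with Keras, so loading is a one-liner (left commented here so the sketch stays self-contained), and a small helper lets us examine what we loaded: the shape, dtype, and value range tell us we have a stack of 28x28 grayscale images with pixels in 0..255. The synthetic array below is a stand-in with MNIST's layout.

```python
import numpy as np

# The usual Keras one-liner (requires TensorFlow, so commented out):
# from tensorflow.keras.datasets import mnist
# (x_train, y_train), (x_test, y_test) = mnist.load_data()

# Summarize an image stack: shape, dtype, and pixel value range.
def describe(images):
    return {
        "shape": images.shape,
        "dtype": str(images.dtype),
        "min": int(images.min()),
        "max": int(images.max()),
    }

# Synthetic stand-in with MNIST's training-set layout, for illustration:
fake = np.random.randint(0, 256, size=(60000, 28, 28), dtype=np.uint8)
```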

So, here is the challenge: when we look at the dataset, it…

Aizaz Ali

A data scientist inclined toward reinforcement learning.
