How Google Cloud AI can help machine learning

Sajjad Hussain
Nov 18 · 21 min read

Although Google Cloud AI (Google Cloud Artificial Intelligence) and machine learning platforms lack some features and several services are still in beta, their scope and quality remain second to none in the industry.

Google has one of the largest machine learning stacks in the industry, currently centered on its Google Cloud AI and machine learning platform. Google open sourced TensorFlow several years ago, yet TensorFlow remains the most mature and widely cited deep learning framework. Similarly, Google spun off Kubernetes as open source software a few years ago, and it is still the leading container management system.

Google Cloud Platform is one of the best sources of tools and infrastructure for developers, data scientists, and machine learning experts, but historically, Google Cloud AI has held little attraction for business analysts who lack a serious background in data science or programming. That situation is now beginning to change.

Google Cloud AI and machine learning platforms include artificial intelligence building blocks, artificial intelligence platforms and accelerators, and artificial intelligence solutions. The AI solutions are fairly new offerings aimed at business executives rather than data scientists, and may include consulting from Google or its partners.

The pre-trained but customizable artificial intelligence building blocks can be used without any knowledge of programming or data science. Nonetheless, data scientists often use them for practical reasons: they get the job done without extensive model training.

The artificial intelligence platforms and accelerators are usually aimed at data scientists, and require coding skill, knowledge of data preparation, and a lot of training time. They are recommended only after the relevant building blocks have been tried.

A few links are still missing in Google Cloud AI's product line, especially in data preparation. The closest thing Google Cloud has to a data import and conditioning service is the third-party Cloud Dataprep by Trifacta. The feature engineering built into Cloud AutoML Tables is promising, however, and it would be useful to offer that kind of service in other contexts.

The dark side of artificial intelligence has to do with responsibility (or the lack of it) and persistent model bias (usually caused by biased training data). Google published its AI Principles in 2018.

Google has many competitors in the artificial intelligence market, just as it does in the public cloud market, where more than six cloud providers compete. To compare fairly and summarize: the AWS cloud platform can do most of what Google Cloud Platform does, and do it well, but usually charges a higher price.

Google Cloud's artificial intelligence building blocks require little machine learning expertise, because they build on pre-trained models and automated training. The artificial intelligence platform lets users train and deploy their own machine learning and deep learning models.


Google Cloud AI building blocks

Google Cloud AI building blocks are easy-to-use components that users can incorporate into their own applications to add vision, language, conversation, and structured-data capabilities. Many of the building blocks are pre-trained neural networks; if they don't meet a user's needs, they can be customized with transfer learning and neural architecture search. AutoML Tables is a little different: it automates the process a data scientist would follow to find the best machine learning model for a tabular data set.

AutoML

The Google Cloud AutoML services provide customized deep neural networks for language translation, text classification, object detection, image classification, and video object classification and tracking. They require labeled data for training, but they do not require significant knowledge of deep learning, transfer learning, or programming.

Google Cloud AutoML customizes Google's battle-tested, high-accuracy deep neural networks for the user's labeled data. Rather than training models from scratch, AutoML implements automatic deep transfer learning (meaning that it starts from an existing deep neural network trained on other data) and neural architecture search (meaning that it finds the right combination of additional network layers) for language pair translation and the other services listed above.

In each of these areas, Google already has one or more pre-trained services based on deep neural networks and huge sets of labeled data. These may well work for a user's data unmodified, and it's worth testing them first to save time and money. If they don't do what is needed, Google Cloud AutoML helps create a model that does, without requiring the user to know how to perform transfer learning or how to design neural networks.

Transfer learning has two big advantages over training a neural network from scratch: it requires much less training data, because most of the layers of the network are already well trained, and it trains much faster, because it only optimizes the last layers.
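The last-layer idea can be illustrated without any deep learning framework: treat the frozen pretrained network as a fixed feature function and fit only a final logistic layer by gradient descent. This is a toy sketch, not AutoML's implementation; every function and value below is invented for illustration.

```python
# Toy transfer learning: the "pretrained network" is a stand-in
# feature function, and only the final logistic layer is trained.
import math

def pretrained_features(x):
    """Stand-in for a frozen pretrained network's penultimate layer."""
    return [x, x * x]

def train_last_layer(samples, labels, lr=0.5, epochs=200):
    """Fit the weights of the final logistic layer only."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = pretrained_features(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid output
            err = p - y                       # gradient of the log-loss
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    f = pretrained_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Tiny labeled data set: class 1 wherever |x| > 1
xs = [-2.0, -0.5, 0.3, 1.5, 2.5, 0.1]
ys = [1, 0, 0, 1, 1, 0]
w, b = train_last_layer(xs, ys)
```

Because most of the "network" is frozen, only three parameters need to be learned, which is why transfer learning needs so little data and time.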

While the Google Cloud AutoML services used to be presented as a bundle, their underlying pre-trained services are now listed alongside them. What most other companies call AutoML is performed by Google Cloud AutoML Tables.


To test the AutoML Vision custom flower classifier, it took about an hour to train the classifier on Google's sample images; photos of tulips taken at a nearby art museum were then used for comparison.

AutoML Tables

For many regression and classification problems, the usual data science process is to assemble a data table for training, clean and condition the data, perform feature engineering, try training every appropriate model on the transformed table, and optimize the hyperparameters of the best models. Once the target field has been identified manually, Google Cloud AutoML Tables can automate this entire process.

AutoML Tables automatically searches Google's model zoo for the model that best fits the structured data, ranging from linear/logistic regression models (for simpler data sets) to advanced deep, ensemble, and architecture-search methods (for larger, more complex ones). It automates feature engineering on a wide range of tabular data primitives (such as numbers, classes, strings, timestamps, and lists), and helps users detect and handle missing values, outliers, and other common data problems.
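As a hand-rolled sketch of two of the chores AutoML Tables automates for numeric columns, consider filling missing values with the median and flagging outliers by z-score. The function names are illustrative, not part of any Google API:

```python
import statistics

def impute_median(column):
    """Replace None entries with the median of the observed values."""
    observed = [v for v in column if v is not None]
    med = statistics.median(observed)
    return [med if v is None else v for v in column]

def flag_outliers(column, z_cutoff=3.0):
    """Return a parallel list of booleans marking |z-score| > cutoff."""
    mean = statistics.fmean(column)
    sd = statistics.stdev(column)
    return [abs((v - mean) / sd) > z_cutoff for v in column]

col = [1.0, 2.0, None, 3.0, 100.0]
filled = impute_median(col)  # median of the observed values is 2.5
```

A production AutoML system does far more (type inference, embedding of categorical and text columns, and so on), but this is the flavor of preprocessing being automated.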

Its code-free interface guides users through the full end-to-end machine learning life cycle, so that anyone on a team can build models and reliably incorporate them into broader applications. AutoML Tables provides extensive interpretability features covering input data and model behavior, which guard against common mistakes. AutoML Tables is also available from APIs and notebook environments.

AutoML Tables competes with several other AutoML implementations and frameworks.


AutoML Tables automates the entire process of creating predictive models for tabular data, from feature engineering to deployment.


In the analysis phase of AutoML Tables, you can view descriptive statistics for all of the raw features.

The free Google Cloud Vision "Try the API" interface lets you drag images onto a web page and view the results. The child is smiling, so the "Joy" label is correct, but the algorithm could not fully identify the hat being worn.

Vision API

The Google Cloud Vision API is a pre-trained machine learning service for classifying images and extracting a variety of features. It can classify images into thousands of pre-trained categories, ranging from generic objects and animals found in the image (such as a cat), to general conditions (such as dusk), to specific landmarks (the Eiffel Tower, the Grand Canyon), and it can identify general attributes of an image, such as its dominant colors. It can isolate areas containing faces and then apply geometric analysis (face orientation and landmarks) and sentiment analysis to them, although it does not identify a face as belonging to a specific person, except for celebrities (which requires a special-use permission). The Vision API uses OCR to detect text in images in more than 50 languages and various file types. It can also recognize product logos and detect adult, violent, and medical content.
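As a concrete illustration, here is how a minimal REST request body for the Vision API's images:annotate endpoint can be assembled in Python. The request and feature-type structure follows the public v1 REST API; the image bytes below are a placeholder, and no network call is made.

```python
import base64
import json

def annotate_request(image_bytes, feature_types, max_results=10):
    """Assemble one batched images:annotate request for a raw image."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": t, "maxResults": max_results} for t in feature_types
            ],
        }]
    }

# LABEL_DETECTION and FACE_DETECTION are real v1 feature types;
# the bytes here are a stand-in for an actual image file.
body = annotate_request(b"\x89PNG...", ["LABEL_DETECTION", "FACE_DETECTION"])
payload = json.dumps(body)  # ready to POST to /v1/images:annotate
```

In practice most users would call the `google-cloud-vision` client library instead of building JSON by hand, but the payload shape is the same.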

Video Intelligence API

Google Cloud's Video Intelligence API automatically recognizes more than 20,000 objects, places, and actions in stored and streaming video. It can also distinguish scene changes and extract rich metadata at the video, shot, or frame level. In addition, it performs text detection and extraction using OCR, detects explicit content, automates closed captions and descriptions, recognizes logos, and detects faces, people, and poses.

Google recommends the Video Intelligence API for extracting metadata to index, organize, and search users' video content. It can transcribe videos and generate closed captions, as well as flag and filter inappropriate content, all more cost-effectively than human transcribers. Use cases include content moderation, content recommendation, media archives, and contextual advertising.

Natural Language API

Natural language processing (NLP) is a big part of the "secret sauce" that makes input to Google Search and the Google Assistant so effective. The Natural Language API exposes that same technology to user programs. It can perform syntax analysis, entity extraction, sentiment analysis, and content classification in ten languages. Users can specify the language if they know it; otherwise, the API attempts to detect it automatically. A separate API specialized for healthcare-related content is currently available on request.

Translation API

The Translation API can translate between more than a hundred languages and can automatically detect the source language if the user doesn't specify one. It comes in three versions: basic translation, advanced translation, and media translation. The advanced Translation API supports glossaries, batch translation, and custom models. The basic Translation API is essentially what powers the consumer Google Translate interface. AutoML Translation lets users train custom models using transfer learning.
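For illustration, this sketch assembles the parameter payload for the basic (v2) translation REST endpoint. The q, target, source, and format parameter names follow the v2 API; omitting source triggers the automatic language detection described above. The code only builds the payload and sends nothing.

```python
def translate_params(text, target, source=None, fmt="text"):
    """Build the query parameters for a basic (v2) translate call."""
    params = {"q": text, "target": target, "format": fmt}
    if source is not None:
        params["source"] = source  # leave out to let the API auto-detect
    return params

params = translate_params("Bonjour le monde", target="en")
```

The same shape, posted to the v2 translate endpoint with an API key, returns the translated text; the advanced (v3) API uses a different, project-scoped request format.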

The Media Translation API translates content directly from audio, either files or streams, in 12 languages, and automatically generates punctuation. There are separate models for video audio and phone-call audio.


Text-to-Speech

The Text-to-Speech API converts plain text and SSML markup into speech, with a choice of more than 200 voices across more than 40 languages and variants. The variants include different national and regional accents, such as American, British, South African, Indian, Irish, and Australian English.

The basic voices often sound mechanical. The WaveNet voices generally sound much more natural, but cost more to use. Users can also create custom voices from their own studio-quality recordings.

Users can speed up or slow down synthesized speech by up to a factor of 4, and raise or lower the pitch by up to 20 semitones. SSML tags let users add pauses, control the pronunciation of numbers, dates, and times, and give other pronunciation instructions. Volume gain can be increased by up to 16 decibels or decreased by up to 96 decibels.
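These knobs map onto the audio configuration of a synthesis request. The sketch below builds such a configuration as a dict, clamping each value to the ranges quoted above (speaking rate 0.25x to 4x, pitch plus or minus 20 semitones, volume gain -96 to +16 dB); the field names follow the REST API's AudioConfig message, and the clamping helper is our own addition.

```python
def audio_config(rate=1.0, pitch=0.0, gain_db=0.0, encoding="MP3"):
    """Build a Text-to-Speech AudioConfig dict with values clamped
    to the documented ranges."""
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    return {
        "audioEncoding": encoding,
        "speakingRate": clamp(rate, 0.25, 4.0),
        "pitch": clamp(pitch, -20.0, 20.0),
        "volumeGainDb": clamp(gain_db, -96.0, 16.0),
    }

# Out-of-range values are clamped rather than rejected here;
# the real API would return an error instead.
cfg = audio_config(rate=8.0, gain_db=30.0)
```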

Speech-to-Text

The Speech-to-Text API converts speech to text via automatic speech recognition (ASR) powered by Google's deep learning neural network algorithms. It supports more than 125 languages and variants, and can be deployed on-premises (with a license) as well as on Google Cloud Platform. Speech-to-Text can run synchronously for short audio samples (one minute or less), asynchronously for longer audio (up to 480 minutes), and as a stream for real-time recognition.

Users can customize speech recognition by providing phrase hints so that domain-specific terms and rare words are transcribed correctly. There are specialized ASR models for video, phone calls, command-and-search, and "default" (everything else). While users can embed encoded audio in the API request, more often they supply the URI of a binary audio file stored in a Google Cloud Storage bucket.
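A sketch of such a request, again built offline as a dict: it supplies phrase hints for domain terms, selects the phone-call model, and points at audio already in a Cloud Storage bucket, the common case noted above. The speechContexts and model fields follow the v1 REST API; the bucket URI is made up.

```python
def recognize_request(gcs_uri, language="en-US", model="default", hints=()):
    """Build a Speech-to-Text recognize request body with phrase hints."""
    return {
        "config": {
            "languageCode": language,
            "model": model,  # e.g. "video", "phone_call", "command_and_search"
            "speechContexts": [{"phrases": list(hints)}],
        },
        "audio": {"uri": gcs_uri},  # audio stored in a Cloud Storage bucket
    }

req = recognize_request(
    "gs://example-bucket/call.flac",  # placeholder bucket and file
    model="phone_call",
    hints=["Kubeflow", "Dataproc"],
)
```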

Dialogflow

Dialogflow Essentials builds on Speech-to-Text and Text-to-Speech, and offers more than 40 pre-built agents as templates for small bots that hold conversations about a single topic. Dialogflow CX is an advanced development suite for creating conversational AI applications, including chatbots, voice bots, and IVR (interactive voice response) bots. It includes a visual bot-building platform, collaboration and versioning tools, and advanced IVR feature support, and it is optimized for enterprise scale and complexity.


Dialogflow CX provides a designer for complex voice-interactive virtual agents. Here the designer lists ten training phrases for the "store.location" intent; similar phrases will also be recognized.

Inference API

Time series data often needs special handling, especially when users want to process streaming data in real time on top of large historical data sets. The fully managed, serverless Inference API, currently in a limited alpha, uses event time stamps to detect trends and anomalies, handles data sets containing up to tens of billions of events, and can run thousands of queries per second with low response latency.

Recommendations API

Building an effective recommendation system with machine learning is considered a tricky, time-consuming problem. Google has automated the process with the Recommendations API, which is still in beta. This fully managed service takes care of preprocessing user data, training and tuning machine learning models, and provisioning infrastructure; it also corrects for bias and seasonality. It integrates with related Google services such as Analytics 360, Tag Manager, Merchant Center, Cloud Storage, and BigQuery. Initial model training and tuning takes two to five days to complete.

Google Cloud AI platform

The Google Cloud AI Platform and accelerators are aimed at developers, data scientists, and data engineers. In most cases, solving a problem with the AI Platform takes significant effort; if users can avoid that effort by using an artificial intelligence building block instead, they should.

The Google Cloud AI Platform supports an end-to-end machine learning workflow for developers, data scientists, and data engineers. While it won't acquire your data or code your model for you, it helps tie the rest of the machine learning workflow together.


The Google Cloud AI Platform links most of the machine learning workflow together, from model training to model versioning and management.

The AI Platform includes several model training services, with training and tuning available on a range of machine types, including GPU and TPU accelerators. The prediction service lets users serve predictions from any trained model; it is not limited to models users trained themselves, or trained on Google Cloud Platform.

AI Platform Notebooks provide JupyterLab notebooks on Google Cloud Platform virtual machines, pre-configured with TensorFlow, PyTorch, and other deep learning packages. The AI Platform Data Labeling Service lets users request human labeling of the data sets they plan to use to train models. The AI Platform Deep Learning VM Images are optimized for data science and machine learning tasks, with the key machine learning frameworks and tools pre-installed and GPU support.

AI Platform Notebooks

For many data scientists, Jupyter or JupyterLab notebooks are one of the easiest ways to develop and share models and machine learning workflows. AI Platform Notebooks make it easy to create and manage secure virtual machines pre-configured with JupyterLab, Git, GCP integration, and the user's choice of Python 2 or Python 3, R, core Python or R packages, TensorFlow, PyTorch, and CUDA.

Although Kaggle and Colab also support Jupyter notebooks, Kaggle is aimed at hobbyists and learners, Colab at researchers and students, and AI Platform Notebooks at business users. For heavy lifting, AI Platform Notebooks can use deep learning virtual machines, Dataproc clusters, and Dataflow, and can connect to GCP data sources such as BigQuery.

Users can begin development on a small virtual machine and later scale up to a more powerful one with more memory and CPUs, possibly adding a GPU or TPU for deep learning training. They can also save notebooks to a Git repository and load them into other instances, or use the AI Platform Training service discussed below.

The screenshots that follow come from working through a codelab using AI Platform Notebooks. A directory of sample notebooks comes preloaded into JupyterLab, and they look very interesting.


When creating a new Google Cloud AI Platform Notebooks instance, you choose a starting environment; you can optimize the virtual machine later.


At the start of the codelab, package imports are set up and queries are run against a public BigQuery dataset to obtain data for analysis and model training. The codelab freely mixes Pandas, TensorFlow, NumPy, and Scikit-learn methods. Witwidget is Google's What-If Tool.


After importing the data, the codelab splits it into training and test sets and trains a simple fully connected neural network. The point of the exercise is to demonstrate AI Platform Notebooks, not to train the best possible model, so there are only 10 epochs, and the final mean squared error isn't anything to brag about.

Explainable AI and the What-If Tool

Users who build and fit models with TensorFlow can use Google's What-If Tool to understand how changing values in the training data might affect the model; in other fields this is called a sensitivity study. The What-If Tool can also display many useful graphics.

With a suitable TensorFlow model, you can use Google's What-If Tool in a Cloud AI Notebook to explore the model's interpretability.

AI Platform Training

Model training usually demands far more computing resources than model development. Users can train simple models or small data sets in an AI Platform Notebook; to train complex models on large data sets, the AI Platform Training service is the better choice.

The training service runs training applications stored in Cloud Storage buckets against training and validation data stored in Cloud Storage, Cloud Bigtable, or other GCP storage services. Users running a built-in algorithm don't need to build a training application at all.

Users can train models from code packages in Cloud Storage (currently TensorFlow, Scikit-learn, and XGBoost), from custom container images, or with the built-in algorithms. They can also use pre-built PyTorch container images derived from the AI Platform Deep Learning Containers.

The current built-in algorithms are XGBoost, distributed XGBoost, linear learner, wide and deep, image classification, image object detection, and TabNet. All of them except image classification and image object detection train from tabular data. Currently, all of the algorithms except XGBoost rely on TensorFlow 1.14.

Users can run AI Platform Training jobs from the Jobs tab of the AI Platform console, or by issuing a gcloud ai-platform jobs submit training command. The command-line route can also automatically upload the model code to a Cloud Storage bucket.
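The command-line route might look like the sketch below, which composes the gcloud invocation as an argument list (convenient for launching with subprocess). The flag names are real gcloud flags, but the job name, bucket, region, and module names are placeholders, and the runtime/Python versions are just examples.

```python
def training_command(job_name, bucket, region, module, package_path):
    """Compose a `gcloud ai-platform jobs submit training` invocation."""
    return [
        "gcloud", "ai-platform", "jobs", "submit", "training", job_name,
        "--staging-bucket", bucket,      # gcloud uploads the code package here
        "--region", region,
        "--module-name", module,         # entry point, e.g. trainer.task
        "--package-path", package_path,  # local Python package to upload
        "--runtime-version", "2.3",
        "--python-version", "3.7",
    ]

cmd = training_command(
    "census_20201118",           # job names must be unique per project
    "gs://my-staging-bucket",
    "us-central1",
    "trainer.task",
    "trainer/",
)
```

Passing the list to subprocess.run (with valid credentials and a real bucket) would submit the job; here we only build the command.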

Distributed AI Platform Training is available for distributed XGBoost, TensorFlow, and PyTorch, with different setup for each framework. For TensorFlow there are three possible distribution strategies, plus six "scale tier" options that define the configuration of the training cluster.

Hyperparameter tuning works by training the model with different values of the variables that govern the training process, which sit outside the model itself (for example, the learning rate, which controls how the algorithm adjusts its weights). Hyperparameter tuning is fairly easy for TensorFlow models, because TensorFlow reports its training metrics in summary events. For other frameworks, users may need the cloudml-hypertune Python package so that AI Platform Training can detect the model's metrics. When defining the training job, the user sets the hyperparameters to tune, their ranges, and the search strategy.
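The search ranges and strategy are declared in a hyperparameter spec attached to the training job (commonly written as an hptuning_config.yaml). The same structure is shown here as a Python dict; the field names follow AI Platform Training's HyperparameterSpec, while the metric tag, trial counts, and parameter ranges are illustrative values of our own.

```python
# Hyperparameter spec for an AI Platform Training job (dict form of
# the usual hptuning_config.yaml). Values here are examples only.
hyperparameters = {
    "goal": "MINIMIZE",                # or "MAXIMIZE"
    "hyperparameterMetricTag": "loss", # metric reported via cloudml-hypertune
    "maxTrials": 20,
    "maxParallelTrials": 4,
    "params": [
        {
            "parameterName": "learning_rate",
            "type": "DOUBLE",
            "minValue": 1e-4,
            "maxValue": 1e-1,
            "scaleType": "UNIT_LOG_SCALE",  # search the exponent uniformly
        },
        {
            "parameterName": "batch_size",
            "type": "DISCRETE",
            "discreteValues": [32, 64, 128],
        },
    ],
}
```

The training code then reads each trial's values from command-line arguments named after parameterName and reports the tagged metric back after each evaluation.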

Users can train with GPUs or TPUs. Usually they specify an instance type that includes the GPUs or TPUs, and then enable them from code. The larger and more complex the model, the more a GPU or TPU is likely to accelerate its training.


Google Cloud AI Platform Jobs is where users set up model training with one of three machine learning frameworks or a custom container image. When choosing a framework, a version must also be chosen.


Built-in algorithms are an alternative to supplying a machine learning framework and code for a custom model.

AI Platform Vizier

Another way to perform hyperparameter optimization is AI Platform Vizier, a black-box optimization service. Vizier runs many trial studies and can tackle many kinds of optimization problems, not just artificial intelligence training. Vizier is still in beta.

AI Platform Prediction

Once you have a trained model, you need to deploy it to serve predictions. AI Platform Prediction manages computing resources in the cloud to run your models: you export the model as artifacts and deploy them to AI Platform Prediction. The model does not need to have been trained on Google Cloud AI.

AI Platform Prediction assumes that models change over time, so a model contains versions, and it is the versions that get deployed. Versions can even be based on completely different machine learning models, although it helps if all versions of a model use the same inputs and outputs.

AI Platform Prediction allocates nodes to handle the online prediction requests sent to a model version. When deploying a version, you can customize the number and type of virtual machines the service uses for these nodes. The nodes aren't exactly virtual machines, but the underlying machine types are similar.

AI Platform Prediction can scale nodes automatically or manually. If a model version uses GPUs, its nodes cannot auto-scale. If the assigned machine type is too big for the model, auto-scaling can be attempted, but the CPU-load condition that triggers scaling may never be met. Ideally, the nodes should be just big enough for the machine learning model.

In addition to predictions, the platform can supply AI Explanations for specific predictions in the form of feature attributions; this capability is currently in beta. Feature attributions are available as bar charts for tabular data and as overlays for image data.

AI Platform Deep Learning VM Images

When users start from a plain vanilla operating system, configuring an environment for machine learning and deep learning (frameworks, CUDA drivers, and JupyterLab) can sometimes take longer than training the model, at least for simple models. Pre-configured images solve that problem.

Users can choose an AI Platform Deep Learning VM Image with TensorFlow, TensorFlow Enterprise, PyTorch, R, or one of six other frameworks. All of the images can include JupyterLab, and the images meant for GPU use can include the CUDA drivers.


Users can create instances from the Google Cloud command line (installed via the Google Cloud SDK) or from the Google Cloud Marketplace. When creating a virtual machine, the user selects the number of virtual CPUs (which also determines the available memory) and the number and type of GPUs. An estimated monthly cost is shown based on the selected hardware, with a discount for sustained use. The frameworks carry no additional charge. If you choose a VM with GPUs, expect to wait a few minutes while the CUDA drivers install.

Deep learning VMs can be created from the Google Cloud Console as well as from the command line. Note that installing the CUDA drivers and JupyterLab each takes only a checkbox; the framework, GPU, machine type, and region are chosen from drop-down lists.

AI Platform Deep Learning Containers

Google also offers Deep Learning Containers for Docker on local machines or Google Kubernetes Engine (GKE). The containers include all of the frameworks, drivers, and supporting software users are likely to need, in contrast to the VM images, which let users install only what they need. The Deep Learning Containers are currently in beta.

AI Platform Pipelines

MLOps (machine learning operations) applies DevOps (developer operations) practices to machine learning workflows. Many parts of the Google Cloud AI Platform support MLOps in some way, but AI Platform Pipelines is at its core.

AI Platform Pipelines, currently in beta, makes it easier to get started with MLOps by saving users the difficulty of setting up Kubeflow Pipelines with TensorFlow Extended (TFX). The open source Kubeflow project is dedicated to making machine learning workflow deployments on Kubernetes simple, portable, and scalable. Kubeflow Pipelines, a component of Kubeflow that is itself in beta, is a comprehensive solution for deploying and managing end-to-end machine learning workflows.


When Spotify moved its MLOps to Kubeflow Pipelines and TFX, some teams ran seven times as many experiments.

TensorFlow Extended is an end-to-end platform for deploying production machine learning pipelines. TFX provides a toolkit that helps users coordinate the machine learning process on various orchestrators (such as Apache Airflow, Apache Beam, and Kubeflow Pipelines), making MLOps easier to implement. Google Cloud AI Platform Pipelines uses TFX pipelines, which are DAGs (directed acyclic graphs), with Kubeflow Pipelines rather than Airflow or Beam as the orchestrator.
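Because a pipeline is a DAG, the first thing any orchestrator must do is put the components in dependency order. This framework-free sketch (not TFX code) topologically sorts a toy pipeline whose component names mimic, but are not, real TFX stages.

```python
# Toy pipeline DAG: each key's set lists the components that must
# run before it. Sorting the graph yields a valid execution order.
from graphlib import TopologicalSorter

deps = {
    "Transform": {"ExampleGen"},
    "Trainer": {"Transform"},
    "Evaluator": {"Trainer"},
    "Pusher": {"Evaluator"},
}

order = list(TopologicalSorter(deps).static_order())
```

With a cycle in the graph (say, Trainer also depending on Evaluator), TopologicalSorter raises a CycleError, which is exactly the "acyclic" requirement in "directed acyclic graph".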

Users manage AI Platform Pipelines from the Pipelines tab of the AI Platform section of the Google Cloud Console. Creating a new pipeline instance creates a Kubernetes cluster, a Cloud Storage bucket, and a Kubeflow Pipelines installation. Users can then define pipelines from the examples, or use TFX to define a pipeline from scratch.

Spotify uses TFX and Kubeflow to improve its MLOps; the company reports that some teams are running more than seven times as many experiments.

AI Platform Data Labeling Service

The Google Cloud AI Platform Data Labeling Service lets users work with human labelers to generate highly accurate labels for data collections destined for machine learning models. The service is currently in beta, and owing to the COVID-19 pandemic its availability is very limited.

AI Hub

Google Cloud AI Hub, currently in beta, offers a collection of assets for developers and data scientists building artificial intelligence systems. Users can both discover and share assets. Even in beta form, AI Hub looks very useful.


Google Cloud AI Hub is a quick way to learn about, build, and share artificial intelligence projects on Google Cloud Platform.

TensorFlow Enterprise

TensorFlow Enterprise provides a Google Cloud optimized distribution of TensorFlow with long-term version support. The distribution contains custom TensorFlow binaries and related packages. Each version of the TensorFlow Enterprise distribution is based on a specific version of TensorFlow, and all of the included packages are available as open source.

Google Cloud AI Solutions

Google's artificial intelligence solutions are aimed at corporate executives rather than data scientists or programmers. Solutions usually come with optional consulting or contract development components; consulting is also available on its own.

Contact Center AI

Contact Center AI (CCAI) is Google's solution for contact centers, designed to deliver human-like interactions. It is based on Dialogflow and can provide virtual agents, discern customer intent, hand off to live agents when necessary, and assist human agents. Google has six partners that can help users develop and deploy CCAI solutions, and support and train their agents.

Build and Use AI

Build and Use AI is a loosely defined solution that primarily applies Google's artificial intelligence expertise, AI building blocks, and AI Platform to a user's business problems. Among other benefits, it can help users set up MLOps through pipeline automation and CI/CD.

Document AI

Document AI combines the Google Vision API's OCR building block with Cloud Natural Language to extract and interpret information from business documents, typically supplied as PDFs. Additional components parse general forms and invoices. Industry-specific solutions for mortgage processing and procurement are currently in testing. Google has six partners that can help implement Document AI solutions.

Pricing of various tools

Cloud AutoML Translation: training, $76 per hour; translation, $80 per million characters after the first 500,000 characters.

Cloud AutoML Natural Language: training, $3 per hour; classification, $5 per thousand records after the first 30,000 records.

Cloud AutoML Vision: training, $20 per hour after the first hour; classification, $3 per thousand images after the first 1,000 images.

Cloud AutoML Tables: training, 6 free node hours (one time) plus $19.32 per hour (using 92 n1-standard-4 equivalent servers in parallel); batch prediction, 6 free node hours (one time) plus $1.16 per hour (using 5.5 n1-standard-4 equivalent servers in parallel); online prediction, $0.21 per hour (1 n1-standard-4 equivalent server).

Video Intelligence: 4 to 7 cents per minute after the first 1,000 minutes per month.

Natural Language: $0.50 to $2 per 1,000 units after the first 5,000 units per month.

Translation: $20 per million characters after the first 500,000 characters per month.

Media Translation: $0.068 to $0.084 per minute after the first 60 minutes per month.

Text-to-Speech: $4 per million characters after the first 4 million characters per month.

Speech-to-Text: $0.004 to $0.009 per 15 seconds after the first 60 minutes per month.

Dialogflow CX agents: $20 per 100 chat sessions, $45 per 100 voice sessions.

Dialogflow ES agents: varies by mode, reflecting the underlying voice and natural language charges.

Recommendations AI: $2.50 per node per hour for training and tuning; $0.27 per 1,000 predictions, with quantity discounts above 20 million requests per month.

GPUs: $0.11 to $2.48 per GPU per hour.

TPUs: $1.35 to $8 per hour.

AI Platform Training: $0.19 to $21.36 per hour.

AI Platform Prediction: $0.045 to $1.13 per node per hour, plus $0.45 to $2.48 per GPU per hour for GPUs.
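Most of the prices above share a "free allowance, then per-unit" shape, so the monthly arithmetic is the same everywhere. The helper below makes it explicit, using the Translation figures quoted above (first 500,000 characters per month free, then $20 per million characters):

```python
def tiered_cost(units, free_units, price_per_unit):
    """Monthly cost after subtracting the free allowance."""
    billable = max(0, units - free_units)
    return billable * price_per_unit

# Translating 2 million characters in a month: 1.5 million billable
# characters at $20 per million comes to $30.
cost = tiered_cost(2_000_000, 500_000, 20 / 1_000_000)
```

The same helper works for the per-minute, per-image, and per-record tiers above once the free allowance and unit price are swapped in.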

Platform

All services run on Google Cloud Platform; some can also run in on-premises facilities or containers.
