AI models and products at Google — A full history and timeline

Uniqtech
Data Science Bootcamp
5 min read · Jul 11, 2024


This Medium post covers the rich history of AI models, landmark papers, and products at Google, with dates and a timeline. Use it for your own posts, subscribe, and follow!

Google I/O Conference

Year      Product
--------  ----------------------------------------
2006      Google Translate
2013      Word2Vec paper
2014      Google acquired DeepMind
2015      TensorFlow
2016      DeepMind AlphaGo
2016      Tensor Processing Unit (TPU)
2017      Keras libraries and APIs in TensorFlow
2017      Google Colab
2017      Attention Is All You Need
2018      BERT Transformer model and architecture
2019      TensorFlow 2.0
2020      T5 language model
2020–21   Google LaMDA
2020–22   DeepMind AlphaFold
2022      PaLM (Pathways Language Model)
2023–24   Gemini models

Key innovations, large language models (LLMs), AI papers, and architectures, including Gemini, DeepMind AlphaGo, BERT, T5, PaLM, and more.

A timeline of AI development and research at Google (AI-generated image). All written content is handcrafted.
Google has led technology, software development, and innovation for many years. It has made significant contributions to computer science and internet search, and it contributed foundational AI research that later became the basis for models at OpenAI and elsewhere.

This guide is a great resource if you are interviewing at Google for Cloud, Machine Learning, or Artificial Intelligence (AI) roles, or for general AI jobs elsewhere.

This Medium post gives you a solid overview of AI technologies at Google, their history, and their timeline. We cover major dates, models, AI products, and papers.

2006 Google Translate

Google Translate uses machine learning to automatically translate hundreds of languages. This AI-enabled product marked an advance in machine translation, and later neural machine translation, using parallel language pairs as training data.

(2014: Google acquired DeepMind, known for neural networks that learn to play video games and for Neural Turing Machines, networks that can access external short-term memory.)

2015 TensorFlow

Google released TensorFlow, an open-source library for deep learning research and development. TensorFlow can be used to accelerate and scale machine learning and deep learning training jobs. TensorFlow 1.x used lazy execution based on a static computational graph; TensorFlow 2.0 later moved to eager, dynamic, Pythonic evaluation.

Uniqtech posts are written exclusively for publication on Medium at uniqtech.medium.com. Please do not repost, republish, or modify.

2016 DeepMind AlphaGo

AlphaGo beat a Go world champion. AlphaZero, the more generalized successor, can learn games using reinforcement learning.

2016 TPU

At Google I/O, Google introduced the TPU, “Google’s custom-developed application-specific integrated circuits (ASICs) used to accelerate machine learning workloads”.

2020 T5 Language Model

In a paper titled Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, by Colin Raffel et al., the Google Research team introduced the T5 language model, which used transfer learning techniques to achieve state-of-the-art results on various language tasks: summarization, question answering, text classification, and more.
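
T5's key idea is that every task, whether summarization, translation, or classification, is framed as text in, text out, signaled by a task prefix. Below is a minimal illustrative sketch using the third-party Hugging Face transformers library (not Google's original codebase); the t5-small checkpoint and the exact API calls are assumptions made for the example.

```python
# Illustrative only: T5's text-to-text framing via the Hugging Face
# transformers library (third-party, not Google's original codebase).
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as plain text with a task prefix.
text = "summarize: TensorFlow is an open-source library released by Google in 2015 ..."
inputs = tokenizer(text, return_tensors="tf")
output_ids = model.generate(inputs["input_ids"], max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```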

2020–2022 DeepMind AlphaFold

“In July 2022, it was announced that over 200 million predicted protein structures, representing virtually all known proteins, would be released on the AlphaFold database.”

2013 Googlers Published Word2Vec Paper

“Word2vec was developed by Tomáš Mikolov and colleagues at Google and published in 2013.” “Word2vec is a technique in natural language processing (NLP) for obtaining vector representations of words. These vectors capture information about the meaning of the word based on the surrounding words.” Word2vec was a significant contribution to the modern Natural Language Processing (NLP) revolution.
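
For intuition, here is a tiny sketch of learning word vectors with the third-party gensim library (a reimplementation, not Google's original C release); the toy corpus and the parameters are made up purely for illustration.

```python
# Illustrative only: learning word vectors with gensim's Word2Vec,
# a third-party reimplementation of the 2013 technique.
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["dogs", "and", "cats", "are", "animals"],
]

# vector_size is the dimensionality of each word embedding; sg=1 selects skip-gram.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, seed=42)

print(model.wv["king"][:5])                   # first few dimensions of the "king" vector
print(model.wv.most_similar("king", topn=3))  # nearest neighbors by cosine similarity
```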

Credit: written by Yu Sun exclusively for uniqtech.medium.com; no republication or reposting elsewhere.

2017 Google Integrates Keras Libraries and APIs into TensorFlow

Google integrated Keras into the TensorFlow core, making the code more Pythonic and easier to read, and giving developers simple, elegant APIs to create neural networks for deep learning.
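
For example, here is a minimal tf.keras sketch that defines, compiles, and trains a small classifier (the data is random and purely illustrative):

```python
# A minimal tf.keras example: define, compile, and train a tiny classifier
# on toy data (the data here is random and purely illustrative).
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("int32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))  # [loss, accuracy] on the toy data
```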

Enjoy what you read? Subscribe for Uniqtech posts and newsletters on Medium.

2017 Google Colab

Google Colab is a cloud-based, Google Docs-like, notebook-style IDE for data science and machine learning, provided by Google. It offers generous in-cloud GPU and TPU access. It is essentially Google's Jupyter Notebook in the cloud, supercharged with Google infrastructure.
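
A quick check commonly run at the top of a Colab notebook (assuming a TensorFlow runtime) to confirm which accelerator is attached:

```python
# Confirm the TensorFlow version and whether a GPU is attached to the runtime.
import tensorflow as tf

print("TensorFlow:", tf.__version__)
print("GPUs:", tf.config.list_physical_devices("GPU"))
```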

“Google announced the 2nd generation of the company’s TensorFlow Processing Unit (TPU), now called the Cloud TPU, at the annual Google I/O event”

2017 Attention Is All You Need (self-attention)

The Google Brain team introduced the self-attention architecture, the key building block of the Transformer (which is made of encoder and decoder stacks with self-attention mechanisms). The attention mechanism can attend to relevant context anywhere in a text, before, after, or even far from the current word, which makes fill-in-the-blank and next-word prediction possible. This is the foundation of modern Large Language Models (LLMs), whose main purpose is text in, text out: generating plausible next words that follow the input text (the prompt).

Transformers were a breakthrough technology, replacing predecessors such as RNNs and LSTMs.
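
At the heart of the paper is scaled dot-product attention. Here is a small NumPy sketch of that operation; the shapes and data are toy values for illustration:

```python
# A NumPy sketch of scaled dot-product attention, the core Transformer
# operation: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how much each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                               # weighted sum of the value vectors

# Toy self-attention: 3 tokens with 4-dimensional embeddings, Q = K = V.
x = np.random.rand(3, 4)
print(scaled_dot_product_attention(x, x, x).shape)   # (3, 4): one context vector per token
```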

If you like the post so far, please clap for us: ← 👏🏻👏🏻

2018 Google Published the BERT Transformer Model and Architecture

Google published Bidirectional Encoder Representations from Transformers (BERT) in a paper. This model and architecture allowed models to learn language representations and embeddings, and to derive meaning from those representations. Transformers are at the core of the modern AI revolution, the rise of Large Language Models (LLMs). While other ways to generate word embeddings and vectors existed prior to transformers, transformers lifted the entire industry to the state-of-the-art results that became today's industry standards. Trivia: the T in ChatGPT stands for Transformer.
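
To make the idea of contextual representations concrete, here is an illustrative sketch of pulling per-token embeddings out of a pretrained BERT using the third-party Hugging Face transformers library; the library, checkpoint name, and API are assumptions for the example, not part of Google's original release:

```python
# Illustrative only: contextual token embeddings from a pretrained BERT
# via the Hugging Face transformers library (third-party, not Google's repo).
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Transformers changed NLP.", return_tensors="tf")
outputs = model(**inputs)

# One contextual vector per token: shape (batch, num_tokens, hidden_size=768).
print(outputs.last_hidden_state.shape)
```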

2019 TensorFlow 2.0

TensorFlow 2.0 was a major update to the TensorFlow library, moving from lazy, graph-based execution to eager execution: operations are evaluated immediately, the code is more Pythonic, and the workflow is simpler. TensorFlow 2.0 also provides the Keras APIs and Pythonic function execution.
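
A small sketch of the difference, assuming TensorFlow 2.x: operations run eagerly by default, while tf.function traces the same Python code back into a graph for performance.

```python
# TensorFlow 2.x runs operations eagerly, like ordinary Python code;
# tf.function compiles the same code into a graph for speed.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])

print(tf.matmul(a, b))        # eager mode: the result is computed immediately

@tf.function                  # traces the function into a TensorFlow graph
def matmul_fn(x, y):
    return tf.matmul(x, y)

print(matmul_fn(a, b))        # same result, executed as a graph
```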

2020–2021 Google LaMDA Conversation Technology

“LaMDA’s conversational skills … built on Transformer … That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next. … LaMDA was trained on dialogue. … it picked up on several of the nuances that distinguish open-ended conversation from other forms of language. (e.g. sensibleness) … Transformer-based language models trained on dialogue could learn to talk about virtually anything … once trained, LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses.”

2022 PaLM Pathways Language Model

The paper PaLM: Scaling Language Modeling with Pathways introduced PaLM, a 540-billion-parameter ‘dense decoder-only Transformer model trained with the Pathways system, which enabled us to efficiently train a single model across multiple TPU v4 Pods. We evaluated PaLM on hundreds of language understanding and generation tasks, and found that it achieves state-of-the-art few-shot performance across most tasks, by significant margins in many cases’.

2023–2024 Gemini Models

The Gemini multimodal models and APIs (the Bard chatbot and LLM were later rebranded as Gemini) were first announced by Google in December 2023 and fully launched, with several variations, in 2024. Google has already enabled multiple variations of Gemini in 2024, including Gemini Pro and Gemini Advanced.
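
For completeness, here is a minimal sketch of calling a Gemini model from Python. It assumes the google-generativeai SDK and the gemini-pro model name as they existed in 2024; the package, model names, and API surface change over time, so treat it as illustrative only.

```python
# Illustrative only: calling a Gemini model via the google-generativeai SDK.
# The API key is a placeholder; model names and the SDK evolve over time.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")        # placeholder credential
model = genai.GenerativeModel("gemini-pro")    # model name as of 2024
response = model.generate_content("Summarize the history of AI at Google in one sentence.")
print(response.text)
```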

Please comment to report any inaccuracies, or to add to and augment the blog post.

Subscribe to see our next publications: OpenAI timelines, history of NLP architectures. Request any future content here: https://ml.learn-to-code.co/message3.html.

Citations:

DeepMind and Google articles on Wikipedia; paper abstracts from arXiv (https://arxiv.org); Keras history per fast.ai; the Google LaMDA engineering blog.

Meet the author: Yu Sun has been writing blog posts about technology since 2012. Initial topics: learn to code, coding bootcamps, JavaScript, Python, Codecademy, Rails. Later: data science and data analysis, which naturally transitioned into machine learning, before AI and LLMs took over the world. Contact YS for technical writing.

This post is published exclusively on uniqtech.medium.com. No reposting or publishing elsewhere. All rights reserved.

Further Reading:

Gemini announcement via Jeff Dean’s tweet https://twitter.com/JeffDean/status/1732415515673727286
