Get Started with Generative AI: Free Courses Offered by Google Cloud
Learn the fundamentals of cutting-edge AI applications for non-technical enthusiasts on Udacity
Earlier this week, Udacity released four free courses from Google Cloud that introduce the fundamentals of Generative Artificial Intelligence (Generative AI, or GenAI). The courses are designed to be beginner-friendly and require no prior experience in AI. In less than three hours, I completed all four and came away with a concise understanding of Generative AI, Large Language Models (LLMs), and the BERT model.
As a product manager with limited exposure to AI projects, I found these courses an ideal starting point for grasping the essential concepts. They present relatable use cases and accessible resources that make complex topics easy to follow. If you, like me, are interested in AI and want to stay on top of the latest trends but are unfamiliar with AI development, I recommend the course overviews below; they should help you decide whether the courses are worth your time.
1. Introduction to Generative AI with Google Cloud
Course components: a 22-minute video, suggested readings, and 5 quiz questions
Keywords: Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), Supervised and Unsupervised Learning, Generative and Discriminative models, Generative AI, Prompt Design
The course covers the following topics:
- Establishing the relationship among AI, ML, and DL, providing a foundation for understanding their interconnectedness.
- Highlighting the distinctions between supervised and unsupervised learning, helping you grasp their different approaches and applications.
- Investigating Generative and Discriminative models, shedding light on their unique characteristics and how they contribute to the AI landscape.
- Defining Generative AI and unraveling its inner workings, enabling you to comprehend how it generates new and innovative outputs.
- Addressing challenges specific to Generative AI, including the notion of hallucinations, and emphasizing the importance of prompt design in mitigating such challenges.
- Exploring various Generative AI model types, expanding your knowledge of the diverse approaches within this field.
- Introducing Google Tools like Bard and GenAI Studio, showcasing practical resources that can enhance your Generative AI journey.
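Prompt design comes up repeatedly in this course as the main lever a non-developer has over a generative model. As a minimal sketch of the idea (the prompts below are my own illustration, not course material; the course demonstrates prompting in Bard and GenAI Studio rather than in code), the difference between zero-shot and few-shot prompting is simply what you put in the input string:

```python
# Illustrative sketch of zero-shot vs. few-shot prompt design.
# The task and example reviews are made up for this demonstration.

task = "Classify the sentiment of this review: 'The battery died in a day.'"

# Zero-shot: the model receives only the task description.
zero_shot_prompt = task

# Few-shot: a handful of worked examples steer the model's output format.
few_shot_prompt = "\n".join([
    "Review: 'Absolutely love it!' -> Sentiment: positive",
    "Review: 'Broke after one use.' -> Sentiment: negative",
    task + " -> Sentiment:",
])

print(zero_shot_prompt)
print(few_shot_prompt)
```

Few-shot prompts like this often improve output quality without any model training, which is why the course treats prompt design as a first-line tool for mitigating issues such as hallucinations.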
2. Introduction to Large Language Models with Google Cloud
Course components: a 15-minute video, suggested readings, and 4 quiz questions
Keywords: Large Language Models (LLMs), Pathways Language Model (PaLM), Question Answering (QA), Prompt Design, Tuning, and Parameter-Efficient Tuning methods (PETM)
The course covers the following topics:
- Defining Large Language Models (LLMs) and discovering how they intersect with the field of Generative AI.
- Uncovering the benefits of utilizing LLMs, highlighting the advantages they bring to various applications and domains.
- Exploring the Pathways Language Model (PaLM), a large language model developed by Google.
- Comparing LLM development using pre-trained APIs versus traditional development methods, providing insights into different approaches and their implications.
- Examining Question Answering (QA) in Natural Language Processing and delving into Generative QA, offering valuable insight into how LLMs answer questions.
- Understanding the importance of prompt design and prompt engineering, and how they impact the performance and output of LLMs.
- Exploring the three main categories of LLMs: generic (or raw) language models, instruction-tuned models, and dialog-tuned models, with an overview of their specific applications and use cases.
- Going deeper into the concepts of tuning, fine-tuning, observation, and Parameter-Efficient Tuning methods (PETM), uncovering strategies to enhance and optimize the performance of LLMs.
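The appeal of Parameter-Efficient Tuning methods is that you adapt a large pre-trained model by training only a small number of extra parameters while the original weights stay frozen. As a toy illustration of one such approach, low-rank adapters in the style of LoRA (this is my own sketch, not the course's material; the dimensions and the "gradient step" are stand-ins):

```python
import numpy as np

# Toy illustration of a parameter-efficient tuning idea (LoRA-style low-rank
# adapters): the large pre-trained weight matrix W stays frozen, and only the
# small factors A and B of a low-rank update A @ B are trained.

rng = np.random.default_rng(0)
d, r = 8, 2                      # model dimension and (much smaller) adapter rank

W = rng.normal(size=(d, d))      # frozen pre-trained weights
A = np.zeros((d, r))             # trainable adapter factors; A starts at zero,
B = rng.normal(size=(r, d))      # so tuning begins from the pre-trained model

def forward(x):
    # Effective weights are W + A @ B; only A and B would receive gradients.
    return x @ (W + A @ B)

x = rng.normal(size=(1, d))
before = forward(x)

A += 0.01                        # stand-in for a gradient step on the adapter only
after = forward(x)

trainable_params = A.size + B.size
frozen_params = W.size
print(trainable_params, frozen_params)   # 32 64
```

Even in this tiny example the adapter has half as many parameters as the frozen matrix; at realistic model sizes the ratio is a tiny fraction, which is what makes these methods practical.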
3. Attention Mechanism with Google Cloud
Course components: a 5-minute video and 7 quiz questions
Keywords: Translation model, encoder-decoder models, Attention Mechanism, Neural Networks, Machine Translation
The course covers the following topics:
- Introducing a Translation model based on the encoder-decoder framework, providing an overview of its structure and functionality.
- Understanding how the attention mechanism differs from traditional encoder-decoder models, and unraveling the inner workings of this approach.
- Exploring how the attention mechanism improves translations, shedding light on its transformative impact on the field of Machine Translation.
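The core computation the course builds up to is scaled dot-product attention: each decoder position computes a weighted mix of the encoder's outputs, with the weights derived from query-key similarity. A minimal NumPy sketch (shapes and values are illustrative, not from the course's video):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how strongly each query matches each key
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V, weights          # output is a weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # e.g. 3 decoder positions, dimension 4
K = rng.normal(size=(5, 4))   # e.g. 5 encoder positions
V = rng.normal(size=(5, 4))

out, w = attention(Q, K, V)
print(out.shape, w.sum(axis=-1))   # (3, 4), with each weight row summing to ~1.0
```

This is exactly why attention helps translation: instead of compressing a whole source sentence into one fixed vector, each output word can look back at, and weight, every source position.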
4. Transformer Models and BERT Model with Google Cloud
Note: This course is more technical and is better suited for individuals who have a foundational understanding of data structure and programming.
Course components: two 11-minute videos, 9 quiz questions, and lab resources on GitHub
Keywords: Transformer Models, Bidirectional Encoder Representations from Transformers (BERT), Natural Language Processing, Masked language modeling (MLM), Next sentence prediction (NSP)
The course covers the following topics:
- Tracing the history of language modeling and providing a brief overview of its evolution.
- Introducing Transformer Models and understanding their mechanisms.
- Exploring various pre-trained transformer models, such as encoder-decoder models (BART), decoder-only models (GPT-3), and encoder-only models (BERT).
- Understanding the BERT Model and its significant impact on enhancing Google search capabilities.
- Unpacking the two core tasks performed by BERT: Masked language modeling (MLM) and Next sentence prediction (NSP).
- Investigating the three essential embeddings utilized by BERT: token, segment, and position.
- Examining examples of downstream tasks where BERT excels, highlighting the model’s versatility and applicability.
- Walking through practical implementation guidance with a lab walkthrough of the BERT Model.
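Of the topics above, the three input embeddings are the easiest to see concretely: BERT sums a token embedding, a segment embedding (which of the two sentences a token belongs to, used for NSP), and a position embedding for every input token. A toy NumPy sketch (the vocabulary, dimensions, and token IDs below are made up for illustration and are far smaller than real BERT):

```python
import numpy as np

# Toy illustration of BERT's three input embeddings (token, segment, position),
# which are summed element-wise before entering the Transformer encoder.

rng = np.random.default_rng(0)
vocab_size, max_len, hidden = 30, 16, 8   # tiny stand-ins for BERT's real sizes

token_emb    = rng.normal(size=(vocab_size, hidden))
segment_emb  = rng.normal(size=(2, hidden))        # sentence A vs. sentence B
position_emb = rng.normal(size=(max_len, hidden))

# A sentence pair such as "[CLS] my dog [SEP] it barks [SEP]" as made-up IDs:
token_ids   = np.array([1, 7, 9, 2, 12, 15, 2])
segment_ids = np.array([0, 0, 0, 0, 1, 1, 1])      # second sentence is segment 1
positions   = np.arange(len(token_ids))

# One summed embedding vector per input token:
inputs = token_emb[token_ids] + segment_emb[segment_ids] + position_emb[positions]
print(inputs.shape)   # (7, 8)
```

The segment embedding is what lets BERT's NSP task distinguish the two sentences in a pair, and the position embedding is what gives the otherwise order-blind Transformer a sense of word order.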
Selected Readings from the Courses
Here are some suggested readings to deepen your understanding of specific topics covered in the courses:
- What is generative AI? by McKinsey & Company provides an accessible overview of Generative AI, featuring examples like ChatGPT and DALL-E. It explains the concepts of ML and AI and delves into the challenges and ethical considerations of the field.
- Build new generative AI powered search & conversational experiences with Gen App Builder by Google Cloud demonstrates how to create generative AI applications using the Gen App Builder, with examples of customer service applications such as retail chatbots and search quality improvement.
- Google Research, 2022 & beyond: Language, vision and generative models by Google Research explores Google’s advancements in language, computer vision, and generative models in 2022, providing insights into their future vision.
- Google Cloud supercharges NLP with large language models by Google Cloud highlights how Google Cloud leverages large language models to enhance natural language processing capabilities.
- Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance by Google Research introduces the Pathways Language Model (PaLM), which achieves impressive performance in language understanding, generation, reasoning, and code-related tasks.
- PaLM API & MakerSuite: an approachable way to start prototyping and building generative AI applications by Google for Developers presents the PaLM API and MakerSuite tools, designed to simplify the development of generative AI applications.
- Transformer: A Novel Neural Network Architecture for Language Understanding by Google Research introduces the Transformer architecture, a groundbreaking neural network architecture for language understanding developed in 2017.
Enjoy your learning journey!