Crack the AI Code: Get Ahead With These Six Buzzwords

ChatGPT told us the most important terms you’ll need to know to sound like the most knowledgeable person in the room

Matan Gans
Byte-Sized Insights
4 min read · Apr 26, 2023


Image via www.vpnsrus.com

It’s almost impossible to hear anything about artificial intelligence today without being bombarded with vaguely explained jargon. At Byte-Sized Insights, we want to make AI an accessible topic for all, and that starts with learning how to talk about it. So, I asked ChatGPT to list the top buzzwords everyone should know when discussing AI. Let’s dive into the most important ones and what they actually mean.

Artificial Intelligence

Of course, you cannot converse about artificial intelligence without knowing what the term “artificial intelligence” actually means. Artificial intelligence, or AI, is the study of how computers can carry out tasks that would normally require human intelligence. This involves developing algorithms that learn from data to make predictions or take actions in new situations. To dive a little deeper, you can take a look at this article, but what you should know for now is that AI is the broad umbrella term that encompasses all of the following buzzwords.

Machine Learning

Machine learning, also known simply as ML, is the foundation of most artificial intelligence applications out there today (the rest is “Good Old-Fashioned AI,” the rule-based methods that predate modern machine learning). What we’re talking about here are the algorithms and models that automatically learn patterns from training data in order to make decisions and predictions about unseen examples in the real world. For instance, when we spend a lot of time with a person, we eventually learn their preferences for movies and TV shows. This is the basic concept behind the recommendation systems at Netflix and Amazon: they learn from our previous activity to suggest movies and books we may want to engage with next.
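To make that “learn from examples” idea concrete, here is a minimal sketch in Python using scikit-learn. The viewing hours and the “liked the new sci-fi release” labels are entirely made up for illustration; real recommendation systems are vastly larger, but the principle is the same: nobody hand-writes the rules, the model infers them from past data.

```python
# A toy "learn from past data, predict for new users" example (all data is invented).
from sklearn.neighbors import KNeighborsClassifier

# Each row: [hours of sci-fi watched, hours of comedy watched] for one user
viewing_history = [[10, 1], [8, 2], [1, 9], [0, 12]]
liked_new_scifi_release = [1, 1, 0, 0]  # 1 = liked it, 0 = didn't

model = KNeighborsClassifier(n_neighbors=1)
model.fit(viewing_history, liked_new_scifi_release)  # "learning" from past behavior

# Predict for a brand-new user who mostly watches sci-fi
print(model.predict([[9, 0]]))  # -> [1], i.e. probably worth recommending
```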

Deep Learning

A subset of machine learning, deep learning refers to algorithms built to loosely mimic the workings of the human brain. A deep learning model is structured as an artificial “neural network”: interconnected nodes arranged in multiple layers, which give it its “depth.” Essentially, this architecture is a huge set of weights between nodes that acts as a learned set of rules for making predictions, similar to the way (we think) the brain operates as a web of neurons that send signals through their connections to thousands of other neurons. Because of their sheer size and ability to learn complex representations, deep learning models back some of the most successful artificial intelligence technologies.
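As a rough illustration (this is not any particular production model), here is what a small “deep” network looks like in PyTorch: a stack of layers whose weights are the learned “rules.” The layer sizes are arbitrary choices for the sketch.

```python
# A tiny "deep" network: stacked layers of weighted connections between nodes.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),  # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),   # stacking more layers is what makes it "deep"
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer, e.g. one score per class
)

# Count the weights and biases the model would learn during training
print(sum(p.numel() for p in model.parameters()))  # roughly 110,000 for this toy network
```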

Natural Language Processing (NLP)

A term that’s been making headlines more recently (thanks to ChatGPT and all of our new favorite chatbots) is natural language processing. Natural language processing is a subfield of artificial intelligence that uses machine learning and deep learning techniques to teach computers to understand and generate human language. NLP draws not just on computer science, but also on linguistics, cognitive science, and psychology. Applications include customer service chatbots, translators like Google Translate, text summarizers used in healthcare reports, and financial tools that derive insights and make stock predictions from social media posts.
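To see how accessible this has become, here is a short, hedged example using the open-source Hugging Face transformers library, which downloads a pretrained sentiment model the first time it runs. The sample sentence is invented, but it mirrors the social-media-sentiment use case above.

```python
# Off-the-shelf sentiment analysis with a pretrained NLP model
# (requires `pip install transformers` plus a backend such as PyTorch).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The new earnings report looks fantastic for shareholders."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```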

Computer Vision

Computer vision is another field in the artificial intelligence space that we hear a lot about, namely because it powers technologies like Snapchat and TikTok filters, object detectors in self-driving cars, virtual and augmented reality, and the generative art models being developed by companies such as Stability AI and Midjourney. Just as NLP is meant to get computers to process language the way humans do, computer vision aims to teach machines to understand image and video data the way we do. Our computers can now identify the different people, places, and objects in visual media.
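Here is a similar sketch for computer vision, again leaning on the transformers pipeline API with a pretrained object detector. The filename is a hypothetical placeholder for any photo on your machine.

```python
# Off-the-shelf object detection with a pretrained vision model
# (requires `pip install transformers timm` plus a backend such as PyTorch).
from transformers import pipeline

detector = pipeline("object-detection")
print(detector("street_scene.jpg"))  # hypothetical image path
# e.g. [{'label': 'person', 'score': 0.99, 'box': {...}}, ...]
```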

Big Data

We can’t cover the topic of AI without talking about big data. That’s because machine learning and all of the tech we’ve been telling you about hinge on huge training data sets. As mentioned before, teaching these models to make inferences from real-world patterns requires a lot of examples. A self-driving car will only learn what kinds of objects to avoid on the street if it has seen millions of images before and worked out its own rules about what a person or animal looks like as opposed to an empty crosswalk sign. You might be asking at this point: how big is the “big” in big data? GPT-3, OpenAI’s language model that came before ChatGPT, was trained on around 45 terabytes of text data. Another example is the ImageNet dataset, which is used to benchmark image recognition models (built with, you guessed it, computer vision) and contains over 1.2 million labeled images. That’s definitely big.

Hopefully, knowing these terms will make you a little more confident as you begin this journey of learning about and discussing AI with us. Check out these Byte-Sized Insights articles for further reading, and keep following us to explore more of the terms and trends that will help you prepare for an AI future:

What Is AI, Really?

Is AI Hype Real?

Innovator’s Dilemma and ChatGPT
