AI-ght, What’s All This Then?

Gems in STEM: A Comprehensive Intro to Artificial Intelligence

Apoorva Panidapu
Geek Culture


Jarvis, please pull up some quick articles to teach me about AI…Jarvis? JARVIS?

Oh wait, my bad, I forgot that you’re not real outside of Marvel. Please excuse me, I’m just going to go sob in the corner while Siri tells me she “didn’t quite get that” in an endless, torturous loop.

If you’re the singular person on Earth who has never seen an MCU movie and you didn’t quite get that, absolutely no worries (but I hope you move to a more exciting rock soon)! I’m messing with you, here’s the rundown: J.A.R.V.I.S. is a fictional AI system created by billionaire genius Tony Stark, essentially a virtual assistant that can do anything from making predictions from enormous piles of data to mimicking human language (and occasionally cracking a joke), which we’ll soon see is harder than it seems!

Right now, some of you may be thinking WTF (Well, That’s Fantastic), but what does this have to do with anything? Patience, young grasshopper. If you haven’t already guessed it, today we are going to be learning about AI, i.e. Artificial Intelligence (which is exactly what our dear J.A.R.V.I.S. is)!

What is AI?

Let’s get right into it: what exactly is AI? Well, think about it this way: just like artificial flavors are designed to mimic natural ingredients, artificial intelligence is designed to mimic human intelligence. In short, AI is about figuring out how to make machines so smart that they can solve problems without our extensive help.

Right now, there’s a bit of a misconception about what AI is in various media, from Hollywood to sci-fi: it’s often thought of as this scary, futuristic threat. But the reality is that AI is already here and exists all around us–it just doesn’t eloquently speak or attempt to eliminate us…yet ;). It powers search engines, recommends your next Netflix binge, and is revolutionizing science and healthcare: radiologists can use it to detect tumors down to the exact shape and volume, and astronomers use AI to find exoplanets in distant solar systems. It has even been used to appeal parking tickets for free, overturning over $3 million in fines in just a couple of months! I just started learning how to drive, SIGN ME UP. (For those wondering, it is called DoNotPay and has now turned into a “robot lawyer”!)

The possibilities of what AI can achieve are endless, from fraud prevention to addressing climate change to enhancing our digital media experience. In fact, there is already AI-generated music that some find indistinguishable from songs created by people!

So, current AI can accomplish specific actions like booking meetings, managing online shopping recommendations, driving a car–whatever it’s told to do. It’s fantastic at analyzing huge amounts of data in order to complete these specific tasks, but AI isn’t so good at transferring these skills to other tasks, learning things after one try, or grasping abstract concepts–which are all part of human intelligence. Moreover, it is not self-aware and can’t actually think like a human. These AIs are also not strictly creative, though Amazon Alexa’s song “It’s Raining in the Cloud (When My Wi-Fi Left Me)” is a musical masterpiece.

More intelligent machines?

So, what if a machine could carry out all these tasks, exercise creativity, and more? This is where we get into sci-fi-ish territory–this more advanced intelligence is called artificial general intelligence (AGI), which is when a machine can understand, learn, and carry out any task that a single human can. This is followed by an even broader type of intelligence: Artificial Super Intelligence (ASI), which is when machines are smarter than the collective minds of everyone on Earth. That’s pretty spooky (which is fitting because Spooky Season just wrapped up)! But fear not–neither AGI nor ASI exists currently, and most people think they are a looong way away.

How exactly do we create AI? The main tools and methods you’ve probably heard before are Machine Learning, Deep Learning, Reinforcement Learning, and Natural Language Processing. That’s a lot of big, slightly intimidating words that don’t currently make much sense–so let’s dive right into demystifying them!

Machine See, Machine Do?

Have you ever been confused as to why the news is always like data this, data that? Or wondered why everyone is so concerned about data privacy? Sure, you don’t want your social security number leaked, but maybe you thought that personally you wouldn’t mind if people saw your basic info because it saves you the time of downloading a dating app? No??…Okay, moving on!

The reason data is so important is that the more data we collect, the smarter we can make machines–which is exactly what machine learning (ML) does. Machines learn from huge data sets and use their knowledge to respond to situations they’ve never seen before! So, it’s a pretty intuitive step that more data means better training for the algorithm/machine, which produces more accurate outputs.

How is this different from the old approach? Well, the traditional method was to show your algorithm a fixed data set and tell it exactly how to respond to each input. But with machine learning, the machine has the power to learn and produce new behaviors that aren’t explicitly programmed…which, if you think about it, is very similar to human intelligence! We’re taught specific skills and then are able to adapt this knowledge in unfamiliar situations. So, it makes sense that machine learning is an important method for artificial intelligence.

Okay, this is just a whole lot of talk right now–let’s get into some details and examples!

Supervised vs Unsupervised Learning

Let’s pretend our machine recently listened to Billy Joel’s “Piano Man” and now desperately wants us to teach it how to play the piano. We probably can’t just leave it alone in a room with a bunch of sheet music–it wouldn’t know what to do with them! Instead, we would start by teaching it the correct finger positions for each note–this is called supervised learning! In general, supervised learning trains a machine with a dataset with labeled points, and tells it what the correct response/decision is. After this training, we give the machine new, unfamiliar data to respond to, and we cross our fingers and hope that it has had enough training to make good decisions on its own (kind of like what my parents will do when they send me off to college). So, for our piano-playing machine, we could give it different sheet music or a different tempo, maybe even a different instrument (even though the machine has its heart set on piano), and see what it does!

Supervised learning should be used when you have known, labeled data for an outcome you’re trying to predict. Say I wanted to figure out if my emails are genuine or spam. (I want to keep helping you, prince of Nigeria, but I’m running out of money and have yet to see profit!) To do this, I would use a specific type of supervised learning: classification. Classification techniques are used to sort data into categories, like speech/writing recognition or medical imaging. So if you want a quick way to organize huge amounts of data into discrete groups, classification is your salvation! (…I know, but you try coming up with a good word that rhymes with classification.) A real-world application of supervised learning, you ask? Clinicians can use patients’ data (like age, weight, blood pressure, medical history, etc.) to predict whether they will have a heart attack within a year–really important stuff. Can you imagine actual people sorting through all this data to try and accurately make a prediction for thousands of patients? Therein lies the power of machine learning.
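If you’re curious what classification looks like in code, here’s a tiny sketch of a spam detector built on one of the simplest possible classifiers: nearest neighbor, which just copies the label of the most similar training example. (Every email, feature, and label here is invented for illustration–real spam filters are far more sophisticated!)

```python
# A toy spam classifier: 1-nearest-neighbor over two hand-made features.
# (All emails, features, and labels here are made up for illustration.)

def features(email):
    """Turn an email into a tiny feature vector:
    (number of '!' characters, number of ALL-CAPS words)."""
    words = email.split()
    return (email.count("!"), sum(w.isupper() and len(w) > 1 for w in words))

# Labeled training data: this is the "supervision" in supervised learning.
training = [
    ("Meeting moved to 3pm, see agenda attached", "genuine"),
    ("Lunch tomorrow? Let me know", "genuine"),
    ("WIN a FREE prize NOW!!! CLICK here!!!", "spam"),
    ("URGENT!!! Your account NEEDS verification!!!", "spam"),
]

def classify(email):
    """Label a new email by copying the label of its closest training example."""
    x = features(email)
    def dist(example):
        fx = features(example[0])
        return (x[0] - fx[0]) ** 2 + (x[1] - fx[1]) ** 2
    return min(training, key=dist)[1]

print(classify("FREE money!!! ACT now!!!"))            # → spam
print(classify("Can you review my draft by Friday?"))  # → genuine
```

The key point: we told the machine the right answer for each training example, and it generalizes from there to emails it has never seen.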

Now, what if I want to predict the time it takes for my best friend to respond to texts (decidedly less important than predicting heart attacks, but it’s currently way too long)? This can’t really be put into a category :(…but have NO fear, regression is here! While classification techniques predict discrete categories, regression techniques predict continuous responses–like stock prices or temperature changes. (Mother Earth won’t like that one.) If your data is continuous and the responses you’re trying to predict are real numbers, it’s regression or REGRETsion. ❤
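And here’s the regression version as a sketch: fitting a least-squares line to some invented (hour-of-day, minutes-to-reply) data so we can predict a continuous number instead of a category. (The data points are made up; the ordinary least squares formula is the real deal.)

```python
# A toy regression: fit a least-squares line to (hour of day, reply time in
# minutes) pairs and predict a continuous response. The data is invented!

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hour I sent the text -> minutes until my friend replied (made up!)
hours   = [9, 12, 15, 18, 21]
minutes = [50, 42, 30, 22, 10]

slope, intercept = fit_line(hours, minutes)
predict = lambda hour: slope * hour + intercept
print(round(predict(20), 1))  # predicted reply time (in minutes) for an 8pm text
```

Notice the output is a real number, not a label–that’s the entire classification-vs-regression distinction in one line.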

Okay, we’ve talked about supervised learning, but if I’m being totally honest…I am too lazy to label my data or to teach a machine what to do with it. Does that make me a bad person? Of course not, it makes me a brilliant (and gorgeous) ML engineer! Just like we have supervised learning, we also have unsupervised learning. In this type of machine learning, the training data given to the machine/algorithm is unlabeled and unsorted, and we let it figure out how it wants to label the data and draw its own inferences. This process can obviously be much harder than supervised learning–it’s like me handing a baby a bunch of random books and seeing what happens (but if Matilda can teach herself how to read, so can you bébé). However, unsupervised learning can reveal hidden patterns and structures in the data that humans might not have been able to notice. The most popular type of unsupervised learning is clustering, which is when the algorithm groups its training data into similar categories. Clustering techniques are currently being leveraged for things like gene sequence analysis and object recognition!
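To see that “figure it out yourself” flavor in code, here’s a minimal sketch of k-means clustering (with k = 2) on made-up, unlabeled 1-D points: nobody tells the algorithm there are two groups, but it finds them anyway by alternately assigning points to the nearest center and recomputing the centers.

```python
# A minimal k-means sketch (k = 2) on 1-D points, no labels anywhere:
# the algorithm invents its own two groups.

def kmeans_1d(points, iters=10):
    # Start the two cluster centers at the min and max point.
    # (Toy version: assumes each cluster keeps at least one point.)
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        # Assign every point to its nearest center...
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        # ...then move each center to the average of its group.
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

# Two obvious "blobs" of unlabeled data the algorithm was never told about.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
print(kmeans_1d(data))  # the centers land near 1 and 9
```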

So Many Techniques! Which One Do I Use?

If you’re going through a tough time trying to pick which ML algorithm to use, don’t worry! (Pitbull has already been there, done that.) Even very Smart and Experienced data scientists can have trouble with this too–sometimes you just need to use good ol’ fashioned trial and error to find the best algorithm for your purposes! However, that doesn’t mean pick one randomly–there is a bit of method in this madness. A good first step is to consider what kind of data you’re working with!

If you want to train your machine to make specific predictions based on your (labelled) data, go for supervised learning! Some examples include predicting a house’s price from its data (like square footage, number of rooms, etc.), predicting weather conditions, or identifying if an image is a cat or dog (very important).

If you want your machine to explore unlabelled data and draw inferences/find patterns, unsupervised learning is your gal! You could use this technique in recommender systems (grouping together users with similar interests) or to detect fraud!

To provide a bigger picture of how machine learning is changing the world, it’s being used in image processing (like Facebook’s automatic tagging), self-driving cars, and healthcare (predicting patient deterioration, detecting eye disease, and more). It is also used to analyze text, from spam filtering to extracting relevant information to sentiment analysis (like identifying an opinion as positive, negative, or neutral), which is being leveraged to try and combat cyberbullying! I could go on and on and on and on…but don’t worry, I’ll spare you my rant.

But while machine learning has been a fantastic tool, it’s not powerful enough yet to mimic human intelligence for more complex data. NOOOOO, WHAT DO WE DO?!?!!?! Alright, take a breather pal. Don’t forget that some of the smartest people in the world work on these problems! Since machine learning isn’t enough, drastic times called for drastic dives: we’re going to dive deeper into…deep learning!

The Deep Blue Learning of the Se-AI

In the mid 20th century, many people were brainstorming what the best way to emulate intelligence would be for AI. I don’t know if it was incredibly genius or incredibly vain that they decided that the answer was to mimic our own brains. To be specific, people started trying to create a mathematical model for the human brain!

Let’s say we want to teach a baby how to identify a cat. Now, this baby doesn’t know anything, so she randomly points at all sorts of objects, stating, “CAT.” Luckily, we can tell her, “No, that’s not a cat,” or “Yes, that’s a cat!” if she gets it right. Slowly but surely, the baby will gain an understanding of how to identify a cat based on their features, even across different types and breeds. Unknowingly, this baby is narrowing down an abstract concept (a cat) by constructing a hierarchy where each layer of abstraction is informed by the knowledge of the previous layer. Weird, huh?

Basically, deep learning takes this simple concept and runs with it: raw data is fed through multiple layers of neurons, which progressively extract more and more specific features based on which neurons “fire” off to give an output. These layers of artificial neurons are what we call a neural network–a majorly simplified version of our brain.

The “deep” part of deep learning refers to the depth of layers in the neural network. And as the number of layers increases, so does the neural network’s ability to learn more and more abstract concepts–because it’s like adding a degree of specificity.

Let’s think about an example to make it make sense: Have you ever marveled at the ability of your Photos app to recognize your face in your emo phase 10 years ago? That’s deep learning hard at work! To learn how to recognize human faces, the first layer of the neural network takes pixels from some example images and passes that information on. The next layers learn how pixels form an edge, then pass this knowledge of edges on to layers that learn the concept of a face. This process of layering knowledge continues until the neural network recognizes specific features, and thus specific faces!

While we won’t get into the math behind these magical tools (since it’s outside of the scope of this article), let’s quickly talk about the most common types of neural networks! Multi-layer perceptrons (MLPs), the classic type of feedforward neural network, consist of an input layer, hidden layer(s), and an output layer. These models are trained with huge amounts of data and are key to things like translation software, computer vision (which is how machines can analyze visual digital inputs), natural language processing (which we’ll talk about in a sec), and more!
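To make those layers concrete, here’s a hand-wired feedforward network sketch: two inputs, a two-neuron hidden layer, and one output neuron that together compute XOR (“exactly one input on”), something no single neuron can do. In real deep learning the weights are learned from data; here I’ve set them by hand purely to show the layered structure.

```python
# A hand-wired feedforward network (2 inputs -> 2 hidden neurons -> 1 output)
# that computes XOR. Weights are hand-picked for illustration, not learned.

def step(z):
    """A bare-bones activation function: the neuron 'fires' (1) or doesn't (0)."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one neuron detects "at least one input on" (OR),
    # the other detects "both inputs on" (AND).
    h_or  = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output layer: fire when OR is on but AND is off -- that's XOR.
    return step(h_or - h_and - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
```

Even this tiny second layer is what buys the network XOR: depth really does add power.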

Deep learning also often uses convolutional neural networks (CNNs), which are very similar to feedforward networks. The difference is that each neuron in a CNN layer receives input from a specific area of the previous layer and nothing else–this area is called the receptive field. CNNs leverage linear algebra (especially matrix multiplication) and are generally used for visual data, like image/pattern recognition and computer vision.
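Here’s the receptive-field idea in miniature: one convolutional filter sliding along a 1-D signal, where each output value only “sees” a small window of the input. (Real CNNs use 2-D, learned filters; this hand-picked edge-detecting kernel is just for illustration.)

```python
# A single convolutional filter sliding over a 1-D signal: each output value
# looks only at a small window (its receptive field) of the input.

def convolve1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# An edge-detecting kernel: it responds strongly where the signal jumps.
signal = [0, 0, 0, 5, 5, 5]
kernel = [-1, 1]
print(convolve1d(signal, kernel))  # → [0, 0, 5, 0, 0]: spikes exactly at the edge
```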

In these two types of networks, signals go through the layers just once. But what if they went through more than once? Recurrent neural networks (RNNs) have feedback loops, and are generally applied to things with time-series data, like predicting sales or looking at the stock market.

In order for deep learning to be effective, it has to be accurate. But in order for it to be accurate, it needs massive amounts of data and processing power to train with this data–which aren’t always readily available. However, since deep learning can create outputs and distinguish patterns directly from unlabeled and unstructured data (which is most of our data), it is an immensely powerful tool.

Okay, I think we’ve dived deep enough in the ocean of deep learning, let’s switch to something else before we get crushed by pressure!

Trick or Treat Learning

DOGGOS. They’re smart and adorable, but they can be mischievous and destructive. What’s the best way to train them? T-R-E-A-T-S. (I’m spelling it out because otherwise I would summon all dogs in a 1 mile radius.) Though simple, the underlying strategy is to define an interactive reward system that helps your dog learn using trial and error with constant feedback.

This is exactly what reinforcement learning (RL) is, but for machines! Reinforcement learning is related to both machine learning and deep learning, but it uses rewards and punishment as feedback to teach the machine (instead of just telling it what the correct response would be). This technique works best in robotics or when teaching agents (machines) how to play video games, with the goal of maximizing the agent’s total reward.
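As a sketch (with made-up numbers for the learning rate, rewards, and world), here’s the doggo-treat idea written as Q-learning, one classic RL algorithm: an agent in a five-cell corridor learns purely from treats that walking right, toward the reward in cell 4, is the move.

```python
import random

# A tiny Q-learning sketch: an agent in a 5-cell corridor (states 0-4) learns
# that walking right toward the treat in cell 4 earns the reward.
# Actions: 0 = left, 1 = right. All hyperparameters are toy choices.
random.seed(0)
N_STATES, TREAT = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]: expected reward
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != TREAT:
        # Explore sometimes; otherwise take the action that looks best so far.
        a = random.randrange(2) if random.random() < epsilon \
            else max((0, 1), key=lambda act: Q[s][act])
        s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
        reward = 1.0 if s2 == TREAT else 0.0
        # The Q-learning update: nudge our estimate toward
        # (immediate reward + discounted value of the best next move).
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, "right" should look better than "left" in every cell.
print([("right" if Q[s][1] > Q[s][0] else "left") for s in range(TREAT)])
```

No one ever tells the agent the correct action–it discovers the policy from trial, error, and treats alone.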

We won’t really get into the specifics of reinforcement learning, but one of RL’s most famous accomplishments is when Google DeepMind’s AlphaGo became the first computer program to defeat a world champion, Lee Sedol, at the incredibly challenging game, Go, where the number of possible positions on the board is greater than the number of atoms in the universe.

Reinforcement learning continues to be used to teach AI how to play computer games, in industrial automation (like MIT’s mini robotic cheetah), and for optimization in stock trading and healthcare!

Is ABC Really That Easy?

Okay, it seems like we’re already doing some really cool things with AI through machine learning and deep learning…why don’t we already have something as advanced as J.A.R.V.I.S.? Well, we don’t realize it, but there’s a LOT that goes into having a simple, organic conversation with someone else, and it is really hard to teach a machine how to replicate it. The branch of AI that tackles giving computers the ability to understand human language, text, and communication is called natural language processing (NLP). (This kind of understanding is called natural language understanding (NLU)–these AI people really love their acronyms, huh!)

What makes you laugh? I don’t know about you, but my sense of humor is completely broken–the most random things make me laugh. Humor is not at all straightforward and depends a lot on context and references and environment and tons and tons of other things (like inside jokes), and shows why NLU is so hard to achieve. So, while I desperately want to see an AI standup comedian on SNL, chances are that won’t happen any time soon. :(

Teaching an understanding of texting is equally, if not more, challenging. My mom can’t keep up (though she would argue she is a Very Cool Mom), so how in the world could a machine even hope to parse all the whims of internet culture?! Take, for example, a keyboard smash. Seems simple, right? WRONG.

Keyboard smashes can be used to express shock, excitement, and other emotions that hugely depend on context, but not only that, they have to look right. For example, AGDFJKAHG looks fine, but YOUIUUIUUYO doesn’t really look right and does not give what needed to be gave.

If this distinction makes no sense to you, I have some unfortunate news for you…you are old. But that’s okay! It just shows how hard it is to make things like this make sense to a machine. (Think about all the strange conversations you’ve had with Siri or Alexa.)
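Just for fun, here’s an invented toy heuristic for whether a string “looks like” a proper keyboard smash. To be clear: this rule is completely made up and absolutely not how real NLP systems work–which is kind of the point. Capturing fuzzy internet-culture intuitions in crisp rules is exactly what makes natural language understanding so hard.

```python
# A (very) rough, invented heuristic for "does this look like a proper keyboard
# smash?": real smashes tend to mix lots of home-row letters, while a loop over
# one small cluster of keys (like YOUIUUIUUYO) reads as "off".

HOME_ROW = set("asdfghjkl")

def looks_like_smash(text):
    letters = text.lower()
    if not letters.isalpha() or len(letters) < 5:
        return False
    home = sum(c in HOME_ROW for c in letters)     # home-row letter count
    variety = len(set(letters))                    # distinct letters used
    # Mostly home-row keys, and enough distinct letters to feel chaotic.
    return home / len(letters) > 0.6 and variety >= 5

print(looks_like_smash("AGDFJKAHG"))    # → True: reads like a real smash
print(looks_like_smash("YOUIUUIUUYO"))  # → False: a top-row loop, doesn't look right
```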

So, if machines ever manage to take this one small step for man, yet giant leap for machine-kind in human communication, they’ll be exponentially more intelligent and capable of traditionally human skills, like critical thinking, forming connections, and maybe even writing sensical essays/stories! This advancement would pull artificial general intelligence (which we talked about earlier) into the realm of possibility and doomsday will be upon us…I mean we’ll have new friends!

AI-ght, What About The Future?

If we’ve learned anything from this long, long article, it’s that AI isn’t robots trying to take over the world, it’s humans trying to understand and replicate our own intelligence to make life easier and to accomplish things that would take us years to do in a matter of minutes.

AI is even being applied to analyze art as easily as a human, create art (like poetry and paintings), and even prove mathematical theorems! It has accelerated research across all fields, such as DeepMind’s AlphaFold 2 which, in mere hours, can predict a protein’s 3D structure–which has long been a huge challenge in biology.

While AI is capable of greatly changing the world for the better, we have to seriously consider its potential effects on all people–not just the often privileged people creating it.

To Be or Not To Be Ethical

As the world starts embracing AI and adopting it on a wide scale, bias in AI systems could disproportionately affect certain groups. For example, there has been an application system that discriminated against women and those with non-European names, and a criminal justice algorithm that mislabeled Black defendants as “high-risk” twice as often as it mislabeled white defendants. Questions on how society wants to use AI must be addressed while continuing to identify and eliminate human biases from AI. The more of us, particularly women and BIPOC, who engage in shaping AI’s development, the better chance we have to build a better, fairer future with AI.

Finally, if there’s anything else I hope you learned, it’s that AI eats data for breakfast. It needs tons and tons of data in order to get smarter, so that it can more accurately find patterns in all sorts of different situations–from cultivating your Daily Mixes on Spotify to finding recommendations for you on Netflix.

This means that the future of AI is reliant on data privacy. If it doesn’t have data to teach it, AI can’t get smarter. So, users must know that their personal data will be secure and protected if companies ever want to use it for AI. Thus, corporations will have to commit, and be held accountable, to creating safe and secure products.

To continue addressing these future concerns for AI, the Global Partnership on Artificial Intelligence launched in 2020 to ensure that AI is developed with democratic values and human rights in mind and to foster the public’s trust in it.

But, it’s interesting to note that AI is based on the assumption that human intelligence can be understood and exactly quantified to the extent that one could replicate it in a machine, which creates some controversy over whether we can make AI that are indistinguishable from humans. This forces the question: “What makes us human?” Would AI have the ability to feel, and thus suffer?…Man, I don’t have time for another existential crisis.

And that’s it folks! You’re now ready to become a millionaire in the tech world…but don’t quote me on that.

I’ve got one last question for you: will you be my AI-friend? Because I can be your dataBAE ;) <3

Until next time! If you found this interesting, make sure to check out the next column! If you have any questions or comments, please email me at

Extra! Extra! Read All About It!

If you find yourself wanting to eventually build your own little J.A.R.V.I.S., here are some links for further exploration and to see what cool things other people are doing! Go crAIzy.

Open AI (AI research lab)

DeepMind Lab (open source platform for AI research)

Project Malmo (Experimentation platform built on top of Minecraft to support AI research)

Top AI Platforms for Business (2020)

More on Natural Language Processing

Reinforcement Learning Problems

Udacity: Intro to Deep Learning with PyTorch

Keep up with the latest AI news with this newsletter!

To be the first one to hear about all my new articles, recent events, and latest projects, make sure to subscribe to my newsletter: Letter? I Hardly Know Her!

This column, Gems in STEM, is a place to learn about various STEM topics that I find exciting, and that I hope will excite you too! It will always be written to be fairly accessible, so you don’t have to worry about not having background knowledge. However, it does occasionally get more advanced towards the end. Thanks for reading!



Apoorva Panidapu

Math, CS, & PoliSci @ Stanford. Advocate for youth & gender minorities in STEAM. Winner of Strogatz Prize for Math Communication & Davidson Fellows Laureate.