Child with a drawing of a screen superimposed over their face.
Image courtesy of GBH.

You Know, For Kids

The state of media literacy for young people in the age of generative AI

Bill Shribman
Nov 9, 2023


It’s ambiguous: is artificial intelligence a tool, a weapon, or both? The twist with AI, I think, is much like the paradox in the Schrödinger’s Cat thought experiment: it’s a black box that might harm us or save us. We must hold both ideas at once as we think about what we, and our kids, need to know about AI, its harms and its benefits alike.

A tools-and-weapons approach is a useful rubric for thinking about the guidance we all need on the power and opportunity AI affords: broadly, what media literacy we need.

Here’s what I believe we need to consider in making media literacy effective for young people.

The Underlying Tenets of Media Literacy Still Hold True

I created and currently produce two media literacy series for PBS KIDS, Search It Up and Ruff Ruffman: Humble Media Genius, as part of my 25 years producing digital content for public media at GBH in Boston. We are using new episodes to showcase what AI is and what it can do, including how kids are using it to make art and text. I’m also working with my colleagues at the Berkman Klein Center on several media literacy initiatives around generative artificial intelligence.

Adults are excited, intrigued, amused, or wringing their hands in equal measure over the growth of advanced computing and of tools that can generate original text, images, audio, or video, loosely called generative AI. But what does this really mean in the context of kids?

In talking to young people, I get a sense that they need as much support in understanding AI as we adults do. And maybe they now need a tad more as generative AI further blends into their technology-rich lives.

They still need to know how media is made, that they themselves can make media, and that it has a purpose — even if it’s AI-assisted.

So, let’s start with some context: What do these three have in common?
The government.
An everyday person.
Elon Musk.

The answer is that these are the most common responses I’ve found in talking to 5th graders about who is responsible for what they find on the internet. There’s a similar range of replies when these 10-year-olds are asked who fact-checks the internet. Popular answers here include the government, no one, and the ubiquitous Mr. Musk.

These kids are often called “digital natives” — born long after the demise of rotary phones, dial-up, Blockbuster, Myspace, and waiting for a letter in the mail. They are deemed native as if they are somehow born with an ability to reset the router or to attach PDFs to emails. I think they are not. Fish are surrounded by water but may be able to tell you little about it. Our young people need to learn, or be shown, how to stay safe online and how to benefit from the many opportunities access to boundless information affords.

Colorful cartoon dog, Ruff Ruffman, Humble Media Genius.
Ruff Ruffman: Humble Media Genius. Image courtesy of GBH.

It’s worth noting that AI is not new. It’s in many of the tools that have been in our hands for a while. For instance, Siri uses prediction to complete a text, and it was trying to break up my marriage long before it was fashionable for more advanced AI to do so. (“I’m in the woods with Natalie,” I texted my wife when she was out of town. My English accent and flaws in Siri’s speech recognition had turned our dog, Nellie, with whom I was enjoying a woodland adventure, into Natalie, my daughter’s twentysomething math coach, with whom I was not.)
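
To make that idea of prediction concrete, here is a minimal sketch in Python of a bigram model, a toy ancestor of the text prediction in our phones. The tiny corpus and the predict helper are invented for illustration; real assistants use vastly larger models, but the principle of completing text from learned patterns is the same.

    from collections import Counter, defaultdict

    # Toy bigram model: given the previous word, guess the most likely next word.
    corpus = "i am in the woods with nellie i am in the car with nellie".split()

    next_words = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_words[prev][nxt] += 1

    def predict(prev_word):
        """Return the word most often seen after prev_word, or None."""
        counts = next_words.get(prev_word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict("the"))  # prints 'woods' (ties are broken by first appearance)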

Kids are already using AI every day if they’re online or on their phones. What do they actually know about it? The following responses are from 10-year-olds:

“It’s really smart, it’s so smart it can go to websites in its memory chip; it can take all the information and put it inside its brain.”
OK, that’s a little robot overlord-y, but it’s close.

“It’s not good the first time, it learns as it plays.”
That’s pretty much how AI systems have beaten champions at Go, chess, and Jeopardy, with the board-game engines in particular improving through millions of games of self-play.
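
That learning-by-playing insight is easy to demonstrate. Below is a minimal sketch, a so-called epsilon-greedy bandit in Python, of an agent that starts out guessing and improves purely through play. It is a hypothetical toy, not how AlphaGo works, but it shows the same principle of getting better with every game.

    import random

    # Two "slot machine arms" with hidden win rates the agent must discover.
    true_payouts = [0.3, 0.7]
    estimates = [0.0, 0.0]   # the agent's running estimate of each arm's payout
    plays = [0, 0]

    random.seed(0)
    for t in range(1000):
        # Mostly play the best-looking arm; occasionally explore the other.
        if random.random() < 0.1:
            arm = random.randrange(2)
        else:
            arm = estimates.index(max(estimates))
        reward = 1 if random.random() < true_payouts[arm] else 0
        plays[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / plays[arm]  # running average

    print(estimates)  # roughly [0.3, 0.7]: not good the first time, better with play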

“The AI is making the picture, and the AI is coded by humans.”
That’s a pretty accurate view of generative art, although it bypasses issues of intellectual property. And of course, these new images based on real people are now pretty convincing. I still can’t believe that the deepfakes of Keanu are not Keanu. But maybe I don’t want to believe they’re fake.

As I share this now, it’s worth noting that we sometimes share information we know to be fake just as willingly as information we know to be true. This is one of the confounding challenges in stemming more harmful misinformation and disinformation.

AI-generated image of Keanu Reeves washing dishes in a pink apron.
Image by @unreal_keanu via TikTok.

“If it can tell you are sick with a disease of some sort and can tell you about it before it gets too serious by noticing unusual things that don’t always happen on a daily basis.”
This is a great encapsulation of the medical world’s hope for AI, with already-proven success in protein folding.

“The world is full with AI’s and no one can be really sure.”
No, we cannot. And so, we should still consider how to help kids thrive in a world where ideas of provenance, authorship, intention, bias, and even why we share information with each other, are increasingly fuzzy.

We Can Use AI as a Tool to Challenge Disinformation

There is a belief that AI could weaponize phishing to be more targeted and more plausible. Conversely, reverse image search, an AI-assisted tool, let me investigate a suspicious friend request from someone who looked like a Danish sea captain and was, it transpired, a Danish sea captain — at least the pictures used were. His affable images, I discovered, had been misappropriated and used as a siren call all over the world in phishing attacks.
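
Commercial reverse image search is a large pipeline, but one of its building blocks is easy to sketch. The snippet below uses the real Pillow and imagehash Python libraries to compare two photos by perceptual hash; the file names and the threshold are invented for the example.

    from PIL import Image
    import imagehash

    # Perceptual hashes of visually similar images differ in only a few bits,
    # even after resizing or recompression.
    suspect = imagehash.phash(Image.open("friend_request_photo.jpg"))
    original = imagehash.phash(Image.open("sea_captain_original.jpg"))

    distance = suspect - original   # Hamming distance between the two hashes
    print("Hamming distance:", distance)
    if distance <= 8:               # threshold is a judgment call
        print("Likely the same underlying photo.")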

Facebook friend request panel, showing a request from Chancel Ndongala Ndongala of Paris, Kentucky.

We Should Avoid Exacerbating Inequalities

If we are not vigilant, new technologies tend to exacerbate existing digital divides, for example by creating a heavy reliance on expensive devices or tools. The current generation of generative AI tools relies, at minimum, on having an internet-connected device. And although cellular data, wi-fi, and Bluetooth may look interchangeable on the surface, the differences in cost and speed among them can be huge for those with limited means, limited data plans, or low-bandwidth connectivity. Unless we think intentionally about ensuring equitable access, many children will be under-equipped to use new AI technologies.

We Must Think Creatively about the Medium of Media Literacy

School-based media literacy may provide some of the answers to helping kids learn more, but the presence of formal instruction varies by state, from none to some. The demands on the school day and the multiplicity of technologies can make integrating media literacy instruction challenging for any educator. We must understand the needs of teachers as we develop in-classroom supports and scaffold them with professional development materials as needed.

That said, we know media literacy messaging works well when it’s either baked into media that kids are already consuming or is standalone content that they gravitate to, whether that’s through video, social media, or digital games. For example, we use both of these approaches at GBH; our episodes of Molly of Denali often model positive uses of media and technology.

Eight panels from the cartoon Molly of Denali.
Molly of Denali. Images courtesy of GBH.

Future-Proofing Media Literacy Education Is Key

The use of generative AI is moving swiftly, with over 5,000 tools now claiming AI support and with many being integrated into the software kids already use. Being strategic about what kinds of media literacy to address is key.

This is especially true for those of us making media about technology, whether that’s professional video or a high-quality media literacy game. These can take months to produce, if we can find the funding to begin with. Our resulting work often has a long tail of use, so for both of these reasons we must be careful to future-proof what we provide and not focus too narrowly on any single tool. The Ruff Ruffman: Humble Media Genius videos have been viewed over 100 million times, so getting the message right, with as much timelessness as possible, is important.

And as a new generation of AI tools becomes intertwined with what our kids interact with (in their searches, in the algorithms that suggest what to listen to or, perhaps more importantly, which friends see their posts, and in the work they do at school), we should take stock and assess whether we’re headed into stormy seas or wide open blue oceans. (That Danish sea captain has clearly left his mark on me.) We should provide media literacy that kids want to engage with; it can’t feel like just another civics lesson.

We Can Learn from the Past to Inform Our Future

There is very little research yet about generative AI, and so we in public media, along with many of our colleagues across academia, are trying to conduct it. As a stop-gap, we’re leaning on studies of related technology: for example, how kids interact with chatbots can be informed by a decade of study into how they interact with digital voice assistants like Siri, which in turn often looks back at prior research into human-computer interaction.

How we have evolved to use other digital tools can help us consider how we might use AI. For example, many of us have grown to trust Wikipedia but perhaps wouldn’t use it for a crucial medical diagnosis. And when was the last time you cross-checked Google Maps directions against a paper map before trusting your computer-proposed itinerary? In other words, we decide how much trust to place in every new tool we use.

Generative AI is in many ways exciting, new, and challenging, and I believe we can and must equip young people with the critical thinking skills to help them use AI effectively.

This essay is part of the Co-Designing Generative Futures series, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the Co-Designing Generative Futures conference in May 2023. All opinions expressed are solely those of the author.


Bill Shribman
Berkman Klein Center Collection

Senior Executive Producer & Director of Digital Partnerships at GBH in Boston.