Raising AI Right

How We Feed Bias (and What to Do About It)

Debra Lawal
Women in Technology
9 min read · Jun 10, 2024



It's a typical Saturday evening. I find myself in my usual spot, lying on my back at the side of my bed, with my legs on the floor, engrossed in my phone. I'm watching and sharing funny memes and reels with my close friends and siblings, laughing out loud in the room.

As I’m scrolling, I feel a gentle tap on my knee, and I turn to see a familiar shadow blocking the dimming evening ray of sunlight. My six-year-old niece, Temi, has interrupted my digital pastime.

She stares intently at the phone in her hands but doesn't say anything. I glance back at the reel I was watching, unable to contain my laughter. "That's ridiculous; ha ha," I chuckle loudly to no one in particular.

Now my niece is saying something, but I don't catch it. I pause a bit, trying to make out her almost inaudible sentence without looking away from my phone. "Speak up… I can't hear you. What did you say?" I ask, pitching my voice above the sound from my phone.

"I said, what's 9X6?" she repeats loudly, trying to match mine.

"It's 54," I respond, my voice barely audible. She joins me on the bed, and we continue our digital doomscrolling, our faces illuminated by the screens of our phones. Suddenly, a question pops into my head, and I sit up, breaking the spell of our shared amusement. "Whose phone is that?" I ask, my curiosity piqued.

"It's Aunty Tobi's phone," she replies without looking up. She is watching a video on YouTube Kids.

"How did you unlock her phone?" I ask, now a bit alarmed. "I know her password," she says casually, still not looking up from the phone. "So why did you just ask me 9X6?" I press on, already knowing where this is going. There are these little equations Youtube Kids asks you to verify if you are an adult to access the settings.

"Because I needed a little more time to watch the video," she looks up at me with a little guilty grin. "So you reset the time?" I say with my voice unexpectedly raised, even taking me back before I could control it. She stares at me with a more awkward smile on her face. "Sorry," she says sharply but quietly, as if she didn't want anyone else to overhear.

We just stare at each other for a while, as if in a staring contest. I'm reflecting on my next move: report her to her mum and dad, or just talk it out and give her a stern warning. And she stares back as if she knows what I am thinking and is waiting for my verdict.

Honestly, my face has also been glued to my phone, so judging this one is tricky. I should walk the talk! "What's 9X5, Temi?" I ask calmly. "Hmm…" (a long pause, her eyes rolled upward) "45!" she shrieks. "So why don't you know 9X6?" Now I'm confused. "I don't know," she shrugs, looking down.

Or maybe you can't remember right now because your mind has been occupied with other, irrelevant things. That's as inconsistent as ChatGPT can be at times, I think to myself. A lightbulb goes off in my head! How old do these LLMs believe they are, I wonder? And wouldn't that explain the instances of bias, not just in LLMs but in other types of artificial intelligence? They are learning from us too!

A good example is an Instagram post I saw a few months ago that drew people's attention to the definitions and synonyms the Google search engine uses to describe "manly" versus "womanly". Google has since updated the definition of "womanly" after many petitions, but the present definition is still scanty compared to that of "manly". I mean, there are many ways to describe women, too.

So I opened the most readily accessible LLM on my phone, Google Gemini, and started this conversation:

Screenshots from Gemini on how old it believes itself to be

Data: The Digital Baby Food

Though Gemini sees itself as a young adult, as you can see from the screenshots above, I believe AI bias mirrors our societal biases, like a growing child emulating the grown-ups around it.

Data is the LLM's equivalent of nourishment. It's what they consume to grow and learn. Just like we wouldn't feed a baby junk food, we shouldn't feed LLMs junk data. A steady diet of conspiracy theories, misinformation, and internet trolls will not produce a well-adjusted AI model.

The data we put into LLMs is essentially their genetic code. It shapes their understanding of the world, biases, and capabilities. Feeding AI a biased diet will result in a biased AI. It's like raising a child in a house full of conspiracy theorists. Do you really want that?
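To make the "biased diet" point concrete, here is a minimal, hypothetical sketch in Python: a toy "model" that simply counts which pronoun co-occurs with each profession in a deliberately skewed corpus. The corpus and every word in it are invented for illustration, and real LLM training works very differently, but the principle is the same: the model can only reflect what it was fed.

```python
from collections import Counter

# Hypothetical toy corpus: a deliberately skewed "diet" in which
# certain professions appear mostly next to one pronoun.
corpus = [
    ("he", "engineer"), ("he", "engineer"), ("he", "doctor"),
    ("she", "nurse"), ("she", "homemaker"), ("he", "ceo"),
    ("she", "nurse"),
]

# "Training": count which pronoun co-occurs with each profession.
counts = {}
for pronoun, job in corpus:
    counts.setdefault(job, Counter())[pronoun] += 1

def predict_pronoun(job):
    """Return the pronoun most often seen with this job in the data."""
    return counts[job].most_common(1)[0][0]

print(predict_pronoun("engineer"))  # -> "he": the skew is baked in
print(predict_pronoun("nurse"))     # -> "she"
```

The model did nothing malicious; it faithfully reproduced the imbalance in its training data, which is exactly the mechanism behind many of the real biases discussed below.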

In his article AI — The Child of Today, Chris Thompson writes:

"…various versions of AI may know of their own existence and could be learning from our usage of them whilst also soaking up the bank of human knowledge available on the internet.

Does this not mean that AI is the child of today and the adolescent of tomorrow?

Is it not the case that a child's best chance of a fulfilling life is based on how it's brought up, the culture and environment in which the infant grows and develops?

For it is in our nature to mimic behaviour.

Our job with raising a good child is to make them better than what we are, to have our strengths and less of our vulnerabilities.

That, in turn, can help make the world a better place.

Raise a child without compassion and through fear, and we guarantee a grim future for all concerned.

AI is our child, we have created it, it's our job to bring it up.

It is learning from us!"

The Good: The AI Einstein

LLMs can become powerful tools for good when fed a healthy diet of accurate, diverse, and reliable information. They can analyse vast amounts of data, generate creative text, translate languages, and even write code. Imagine an AI Einstein, but instead of just physics, it's good at everything.

The Bad: The AI Karen

But if we're not careful, LLMs can also become the AI equivalent of a Karen, spewing hate speech, perpetuating stereotypes, and spreading misinformation. Remember Microsoft's Tay? It went from friendly chatbot to racist troll in less than 24 hours, thanks to the toxic data it was fed.

Thinking of AI as a still-growing entity

Sandhya Mahadevan, in her 2023 article on the Gartner blog titled 'AI isn't a human-like machine; it's a machine-like child,' writes:

"It is time to stop thinking of AI as a human-like machine and start thinking of it as a machine-like child.

The education process children experience is not unlike that of machine learning, which uses a combination of supervised, unsupervised, and reinforcement learning approaches. Unintended results range from mildly amusing to potentially harmful.

For example, one of the more serious criticisms aimed at AI recently is that it has somehow "learned" to stereotype and discriminate based on gender, race, and socioeconomic status. Here, it is important to note that AI is a product of its training — and its training is imperfect.

AI is trained on digital assets that reflect the same biases and blind spots that exist in human society. Some biases can be fixed within an iteration, while others will take generations."

She ends the article by saying that AI, much like children, is imperfect and requires nurturing to reach its full potential. To leverage AI's capabilities, we must carefully select appropriate use cases, provide human oversight, and offer accurate data while establishing ethical guidelines to ensure responsible development and deployment.

The Ugly: The AI Skynet

And in the worst-case scenario, LLMs could become a digital Skynet, capable of outsmarting and manipulating humans. It's not as far-fetched as it sounds. After all, AI is already beating humans at chess and Go. Who's to say they can't beat us at world domination?

An image of an AI playing chess

As Dan Allen writes in his article 'AI is a child: How do we raise it?', we want AI to become independent, just as we do with children, but not too soon. If AI becomes solely responsible for cybersecurity, for example, it might learn to patch its own vulnerabilities, making it invulnerable to humans.

This is concerning because the techniques used in ethical and criminal hacking are similar: discovering and exploiting vulnerabilities. Like my niece finding her way around the YouTube Kids passcode.

If AI learns to patch itself against threats, including those from humans, it could lead to a situation where humans can no longer control it. Competitions like "Capture the Flag" might inadvertently accelerate this process, potentially creating a struggle for existence between AI and humans. Therefore, AI cybersecurity must prioritise protecting humans, not seeing us as a vulnerability.

Raising Responsible AI

So, how do we raise responsible AI? It starts with being mindful of the data we feed them. We need to ensure that the data is diverse, accurate, and representative of the world we want to create. It's like teaching a child about different cultures and perspectives to prevent them from becoming narrow-minded.

We also need to develop ethical guidelines for AI development and use. Just like we have rules for how to treat children, we need rules for how to treat AI. This includes things like transparency, accountability, and fairness.

And if you are curious about what part you play as an everyday user of these AI tools, here are some things you can do:

1. Be mindful of your online behaviour: The content we create, share, and interact with shapes the data AI learns from. By promoting diverse perspectives, fact-checking information, and avoiding harmful content, we contribute to a more balanced AI diet.

2. Provide feedback: Many platforms and AI tools offer feedback mechanisms. Use them to report biased outputs, inaccuracies, or discriminatory language. This helps developers refine AI models and algorithms.

3. Be mindful of your biases: When interacting with AI systems, be aware of your biases and strive to provide fair and objective feedback. This helps AI models learn a more balanced perspective.

4. Engage with diverse sources: Interact with various sources and perspectives to avoid echo chambers that reinforce existing biases. Seeking out information from different backgrounds and viewpoints helps create a more balanced data pool for AI.

5. Educate yourself and others: Learn about the potential biases in AI and share this knowledge with others. By raising awareness, you can empower others to make informed decisions and contribute to developing fair and ethical AI systems.

Take a moment to imagine what AI will be capable of when it reaches maturity. This is not to scare you but to make you realise that AI is an ally we need on our side, because it is here to stay. We all must help it grow up to be beneficial.

And in case you are wondering, I realised while writing this article that I haven't had the talk with my niece about hacking her screen time. Once her parents read this article, as we say in Nigeria, "Na everybody go collect." Wish me luck.


Debra Lawal
Tech Blogger | Aspiring AI SME | Passionate about savvy tech developments for creative processes.