The Role of Systems Thinking in Creating Unbiased AI Systems

Sundarapandian C
Published in Kinomoto.Mag AI
Mar 27, 2024 · 16 min read

Today, we use AI, or artificial intelligence, a lot more than we used to. AI helps computers and machines do tasks that usually need human intelligence. This means they can learn, make decisions, and solve problems. For example, AI is used in recommending videos on YouTube, deciding who gets a loan, and even driving cars without a person steering.

But there’s a problem called bias in AI. Bias means the AI might favor or ignore certain types of information or people without a fair reason. It’s like when a teacher picks the same student every time to answer questions, even though there are other students raising their hands too. This happens because AI learns from data, which is a lot of information that we humans give it. Since people have biases, the information can have biases too. For instance, if we mostly show AI pictures of apples as red, it might start thinking all apples are red, which isn’t true.

People say bias in AI is unavoidable, which means it’s really hard to stop it from happening. This is because AI systems learn from what we know and what we give them, and none of us are perfect. We all see the world a little differently, so the information we give AI can make it biased. Also, deciding what is fair isn’t always clear. What seems fair to one person might not seem fair to another. That’s why figuring out how to deal with bias in AI is tricky but very important.

The Nature of Choice and Bias

When we make choices, like picking a favorite ice cream flavor or deciding who our best friend is, we don’t just randomly decide. Our decisions are influenced by our experiences, what we like or dislike, and even what we’re used to. This is what we call bias. It’s like having a sneaky little voice in our heads that whispers, “You’ve always liked chocolate more than vanilla, pick chocolate!” even before we taste the new vanilla flavor that just came out. This isn’t always bad, but it means we’re not starting from zero every time we make a choice; we’re leaning a certain way because of our past.

Now, when it comes to AI, these machines learn from us. We teach them how to think and make decisions by feeding them lots of examples. But here’s the catch: if we’re biased, the examples we give them are probably biased too. Let’s say we’re teaching an AI about what makes a good movie. If we only show it action movies because that’s what we like, it might think all good movies must have car chases and explosions. That’s not really fair to all the great comedy or drama movies out there, right?

So, when we use AI to help us make decisions, we need to remember that it’s a bit like a student learning from us. If we’re always showing it the world from just one angle, that’s the only angle it’s going to know. This means the AI can end up making decisions that aren’t balanced because it’s following the biased examples we gave it. This doesn’t mean AI is bad; it just means we have to be really careful about teaching it to see the world from as many perspectives as possible.

Economic Perspectives on Bias

When we talk about the economy, we’re talking about how money and resources are used in our world. This includes everything from how much things cost to how jobs are created. Now, believe it or not, the economy can affect AI too, especially when it comes to bias in the information AI learns from.

Imagine you’re trying to teach an AI to help figure out the best ways to reduce poverty or to help countries develop economically. The information you use to teach the AI can have biases based on economic conditions. For example, if most of the data comes from very wealthy countries, the AI might not understand the challenges faced by poorer countries. It’s like trying to explain what it’s like to be hungry to someone who has always had plenty of food.

Let’s take a closer look with an example. If an AI is being used to decide where to send aid to help people in poverty, it needs to understand what poverty looks like in different places. But if the AI’s information mainly comes from cities and ignores rural areas, it might wrongly assume that poverty is the same in both places. Rural poverty might involve not having access to clean water or schools, while in cities, it might be more about overcrowded living conditions. If the AI doesn’t get this, it might suggest solutions that don’t make sense for everyone.

This kind of bias happens because the data sets the AI learns from are not complete. They don’t include enough variety to show the whole picture. It’s like trying to paint a landscape using only one color. You can’t capture the full beauty of the scene without a full palette of colors.
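To make this concrete, here is a small sketch of how a developer might audit a dataset for this kind of gap before training. Everything in it, from the field names to the 10% threshold, is an invented illustration, not a standard recipe:

```python
from collections import Counter

def audit_representation(records, group_key, min_share=0.10):
    """Flag groups that make up less than `min_share` of the data.

    `records` is a list of dicts; `group_key` names the field to
    audit (e.g. "region"). The 10% threshold is an illustrative
    choice, not a standard; pick one that fits your context.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: count / total
        for group, count in counts.items()
        if count / total < min_share
    }

# Hypothetical aid-targeting dataset, heavily skewed toward cities.
data = [{"region": "urban"}] * 920 + [{"region": "rural"}] * 80
print(audit_representation(data, "region"))
# {'rural': 0.08} -- rural areas are badly underrepresented
```

A check this simple cannot tell you what rural poverty looks like, but it can warn you, before any model is trained, that the data barely contains rural voices at all.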

In short, economic conditions can shape the information AI systems learn from, leading to biases. This is really important when we use AI to tackle big issues like poverty alleviation and economic development because we want to make sure the AI’s suggestions are helpful and fair for everyone, no matter where they live or how much money they have.

The Role of Business and Commerce

Businesses and companies play a huge role in how AI is developed and used in our world today. This is because creating AI technology often requires a lot of money and resources, and businesses are usually the ones that have both. But businesses also want to make a profit; that's how they keep running. This focus on making money can influence the kind of AI systems they choose to develop and how these systems are used.

Imagine you’re a company that’s creating an AI system. You have to decide: do you spend your money on an AI that helps make the world a better place but doesn’t make much profit, or do you develop an AI that can make you a lot of money but doesn’t help as many people? Sadly, many companies might choose the second option because it looks better for their bank accounts. This is what we call a “bias towards profitability.” It means that instead of making decisions based on what’s best for everyone, decisions are made based on what’s going to make the most money.

Here’s how this can become a problem. Let’s say a company creates an AI that recommends which movies people should watch next. If the company makes more money when certain movies are watched more often, the AI might start suggesting those movies more, even if they’re not the best choice for everyone. This means the AI isn’t really helping people find movies they’d love; it’s helping the company make more money.

This situation shows a risk that comes with mixing business interests with AI development: the things that are best for making money aren’t always the things that are best for people or society. For example, an AI could be used to solve problems like hunger or to help fight climate change. But if these uses don’t make a lot of money for businesses, they might not get as much attention or resources.

In the world of business and commerce, the way AI is shaped by the pursuit of profit can sometimes mean that the biggest benefits of AI aren’t shared by everyone. This can create a world where AI helps widen the gap between the rich and the poor instead of bringing us closer together. So, it’s important for businesses to remember that making a positive impact on society can be just as important as making a profit.

Change Management in AI Development

Changing something that’s already set in its ways can be really hard, whether it’s a habit we’ve had for years or an AI system that’s been doing its job in a certain way. When we talk about making changes to AI systems, especially to reduce bias, we’re up against some big challenges. It’s like trying to teach an old dog new tricks, but in this case, the dog is a complex computer program that affects a lot of people.

The Challenges

First off, changing an established AI system is tough because these systems can be very complicated. They’re like giant puzzles with millions of pieces. If you change one piece, you might affect a bunch of others in ways you didn’t expect. This makes it tricky to reduce bias without messing something else up.

Also, a lot of these AI systems are deeply woven into our daily lives. They suggest what movies to watch, help doctors diagnose diseases, and even decide who gets loans. Because they're so important, changing them isn't just a technical issue; it affects people's lives. It's like trying to fix a plane while it's flying; you have to be really careful not to cause a crash.

Strategies for Change

So, how do we make these changes without causing more problems? “Switch” by Chip Heath and Dan Heath gives us some clues by showing how to guide change in a positive direction. Here are a few strategies inspired by their ideas:

Find the Bright Spots: Look for examples where the AI system is less biased or making decisions fairly. Understanding what’s working well can help us apply those lessons to other parts of the system.

Shrink the Change: Big problems can feel overwhelming. Instead of trying to fix everything at once, start with small, manageable changes that can have a big impact over time. It’s like cleaning a messy room by starting with one corner.

Shape the Path: Make it easier for AI developers to make ethical decisions by providing clear guidelines and tools for identifying and reducing bias; a small sketch of one such tool follows this list. If the path is clear, it's easier to follow.

Rally the Herd: Change is easier when everyone’s on board. By creating a community of AI developers, researchers, and users who care about reducing bias, we can share ideas and encourage each other to make ethical choices.

Keep the Destination Clear: It’s important to have a clear goal. In this case, our goal is to create AI systems that are fair and unbiased. Keeping this goal in mind helps guide the decisions we make along the way.
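To give one concrete example of the kind of tool that can shape the path, here is a minimal sketch of a "demographic parity" check, a common fairness measure that compares how often each group receives a favorable outcome. The function and data are invented for illustration; real fairness toolkits offer more thorough versions of this idea:

```python
def demographic_parity_gap(decisions, groups):
    """Largest gap in favorable-outcome rate between any two groups.

    `decisions` is a list of 0/1 outcomes (1 = favorable, e.g. a loan
    approved); `groups` gives each person's group label. A gap near 0
    means every group is approved at a similar rate. Names and data
    here are invented purely for illustration.
    """
    totals, positives = {}, {}
    for decision, group in zip(decisions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: group B is approved far less often than group A.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print(rates)  # {'A': 0.8, 'B': 0.2}
print(gap)    # 0.6 -- a gap this large is a signal to investigate
```

A number like this doesn't decide anything by itself, but it makes the path easier to follow: a developer who sees the gap printed out is far more likely to stop and ask why it exists.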

Making changes to reduce bias in AI isn’t easy, but it’s definitely possible. By tackling the problem step by step and working together, we can guide AI development in a direction that’s not only more ethical but also better for everyone involved.

The Hook of Bias in AI

Imagine you’re playing a game that learns what you like and keeps showing you more of it to keep you playing longer. This is similar to how some AI works. It learns from your choices and keeps feeding you similar things, creating a cycle. This cycle, or feedback loop, can keep you “hooked,” but it’s not always for the best. When AI is biased, it means it’s not fair or balanced right from the start. So, if an AI keeps showing you things based on biased information, it can make those biases even stronger over time.

Feedback Loops and Their Impact

A feedback loop happens when an AI system uses its own decisions to learn and make future decisions. If the system starts off with a bias, like preferring one kind of music over another because of the data it was trained on, it will keep reinforcing that preference. For example, if an AI recommends songs to you based on what it thinks you like, but its choices are biased, you might keep getting recommended the same type of music. This doesn’t just limit your choices; it can also prevent you from discovering new music you might like.

These loops can have bigger consequences, too. In serious settings, like job hiring AI, if the system is biased towards certain resumes, it might keep favoring similar candidates, making it hard for others to get noticed. This can make unfairness and discrimination worse in the long run.
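To see how this lock-in happens, here is a toy simulation. Every number and the update rule are invented purely to show the mechanism: the system starts with a tiny preference for rock, always recommends its current favorite, and only the genre it shows can ever earn the clicks that would raise a score:

```python
import random

random.seed(0)

# Toy recommender. It always shows the genre it currently scores
# highest (a greedy policy), and every click strengthens that score.
# The listener actually likes both genres equally; all numbers here
# are invented purely to illustrate the feedback loop.
scores = {"rock": 0.55, "jazz": 0.45}  # tiny skew from biased training data
clicks = {"rock": 0, "jazz": 0}

for step in range(1000):
    pick = max(scores, key=scores.get)   # recommend the current favorite
    if random.random() < 0.5:            # user genuinely likes both equally
        scores[pick] += 0.01             # only the shown genre can earn clicks
        clicks[pick] += 1

print(scores)   # rock's score has grown roughly tenfold; jazz is frozen at 0.45
print(clicks)   # jazz: 0 -- it was never shown, so it never got a chance
```

Because jazz is never shown, it never gets to collect the evidence that would correct the initial skew; the loop turns a small bias in the data into a permanent one.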

Ethical Considerations

Designing AI systems that hook users isn’t bad on its own. It’s how companies keep us engaged with their apps and websites. However, when these systems are built on biased data, they can lead to ethical problems. They can keep us in a bubble, showing us only what they think we want to see, based on flawed or narrow views. This can limit our experiences and reinforce stereotypes.

So, when creating AI systems, it’s crucial to think about the data they’re trained on. Is it fair? Does it represent everyone equally? Also, how these systems are designed matters. They should help us explore and grow, not just keep us stuck in a loop of the same old things.

Moving Forward

To address these issues, developers and designers need to be aware of the potential for bias in AI. They should work to identify and correct biases in the data and algorithms. Also, it’s important to create systems that encourage discovery and diversity, not just repetition. This way, AI can help us expand our horizons, not limit them.

In summary, while AI has the power to keep us engaged and make our lives easier, we need to be mindful of how it’s designed. By addressing biases and focusing on ethical design, we can ensure that AI systems benefit everyone fairly, leading to a more inclusive and diverse digital world.

Fostering Creative Confidence in AI

In the book “Creative Confidence” by Tom Kelley and David Kelley, the authors talk about unleashing the creativity that lies within all of us. They argue that everyone has the potential to be creative and that tapping into this creativity can lead to remarkable innovations. When we apply this idea to AI development, it opens up exciting possibilities. By encouraging creativity in the way we design and develop AI systems, we can come up with innovative solutions to tackle the issue of bias, making these systems fairer and more equitable for everyone.

Innovation in AI Development

The first step in fostering creative confidence in AI is to recognize that bias is not just a technical problem; it’s a creative challenge. This means stepping back and looking at the problem from new angles. Instead of asking how we can tweak existing algorithms to be less biased, we might ask how we can completely rethink the way AI systems learn from data. Could there be a totally new approach that hasn’t been considered yet?

Encouraging teams to think creatively about solutions to bias also means creating an environment where it’s safe to take risks and make mistakes. This might involve setting up “hackathons” or brainstorming sessions focused specifically on addressing bias in AI, where the wildest ideas are welcome.

Case Studies: Creative Approaches to Fairness

There are already some inspiring examples of creative approaches to making AI more fair:

Diverse Data Collection: One team of developers realized that their facial recognition AI was not as good at identifying people of color. They responded by launching a global call for diverse facial images, drastically improving the system’s fairness. This solution came from understanding the creative challenge of gathering more diverse data, rather than just tweaking the algorithm. (A sketch of the per-group measurement that surfaces this kind of gap follows these examples.)

Ethical AI Frameworks: Another group approached the problem by developing a set of ethical guidelines for AI development. They created a framework that helps developers consider the social impact of their AI systems at every stage of development. This creative solution moves beyond technical fixes to embed fairness into the entire design process.

Bias Bounties: Inspired by the concept of “bug bounties” in cybersecurity, a company introduced “bias bounties.” They encouraged users and independent researchers to find and report biases in their AI systems, rewarding them for their findings. This innovative approach leverages the creativity and insight of a wide community to improve fairness.
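The first example above rests on one simple habit: measuring accuracy separately for each group rather than as a single overall number. Here is a minimal sketch of that habit, with invented data:

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Accuracy computed separately for each group.

    One overall accuracy number can hide large gaps; breaking it
    down per group is how skews like the facial-recognition example
    above get noticed. All data here is invented for illustration.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Toy results: the overall number looks fine, but group "B" fares worse.
preds  = [1, 0, 1, 1, 0, 1, 1, 0]
labels = [1, 0, 1, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(preds, labels, groups))
# {'A': 1.0, 'B': 0.25} -- overall accuracy is 62.5%, masking the gap
```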

The Power of Creative Confidence

These case studies show that when we encourage creative thinking in AI development, we can find unique and effective ways to address bias. This doesn’t mean ignoring the technical aspects of AI design; rather, it’s about combining technical skills with creative thinking to explore new solutions.

Fostering creative confidence in AI development means believing in the possibility of change and innovation. It’s about looking at the challenges of bias not just as obstacles to overcome, but as opportunities to design better, more equitable AI systems. By encouraging creativity and innovation, we can move closer to creating AI that serves everyone fairly and justly.

The Future of Bias in AI

In the era of artificial intelligence, bias is an inevitable companion, deeply rooted in the very fabric of human cognition and societal constructs. This intrinsic characteristic of AI, much like the shadow to the body, emerges not from the technology itself but from the inputs it is fed: crafted, curated, and coded by human hands. Our decisions, infused with the nuances of personal experience and societal conditioning, become the blueprint upon which AI models are built.

Drawing parallels to the profound insights of “The Fifth Discipline” by Peter Senge, particularly the concept of systems thinking, we find ourselves standing before a tap controlling the flow of water. The water, in this metaphor, symbolizes the biases flowing through our AI systems, shaping decisions, influencing outcomes, and molding futures. The critical question then becomes: do we control the water, or does the water control us? Systems thinking urges us to see beyond the immediate, to understand the complex interplay of actions and feedback loops that define our systems. In the context of AI, it challenges us to recognize how our biases, when embedded into AI, can create self-reinforcing cycles that either perpetuate inequality or drive us towards equitable solutions.

The potential of bias in AI to exacerbate social inequalities is stark, threatening to deepen divides if left unchecked. Without a conscious effort to understand and mitigate these biases, AI systems could inadvertently cement the very disparities they have the power to dismantle. However, armed with awareness and intent, we possess the capability to harness AI as a formidable ally in our quest for social justice. By leveraging its analytical prowess, we can unearth subtle biases and structural inequalities, paving the way for informed interventions and systemic change.

This juncture calls for a coalition of diverse minds and disciplines: a fusion of technology, ethics, economics, and civic participation. Technologists bring to the table the architectural blueprints of AI, ethicists provide the moral compass guiding its use, economists assess its impact on the fabric of society, and the public, the ultimate beneficiary, offers a lens into the real-world implications of these technologies. Together, this multidisciplinary force can forge AI systems that are not only reflective of our societal diversity but are active participants in fostering a more equitable world.

A Systemic Approach to Action

Inspired by the principles of systems thinking, our approach to mitigating bias in AI must be holistic, recognizing the interconnectedness of all contributing factors, from data collection and algorithm design to deployment and feedback mechanisms. It challenges us to ask not just how we can tweak algorithms to be less biased, but how we can transform the entire ecosystem of AI development to be more inclusive and just.

Feedback Loops as Tools for Change: Just as systems thinking highlights the importance of feedback in understanding system behavior, we must implement robust feedback mechanisms in AI development to identify biases and trigger corrective actions; a minimal sketch of such a mechanism follows this list.

Leveraging Leverage Points: In every system, there are leverage points, places within a complex system where a small shift in one thing can produce big changes in everything. Identifying and acting on these leverage points within AI systems can be a powerful strategy for reducing bias.

Fostering an Adaptive Learning Culture: Embracing the ethos of a learning organization, as described in “The Fifth Discipline,” can empower AI developers and stakeholders to continually adapt and refine AI systems in response to evolving understandings of bias and fairness.
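As a sketch of what such a feedback mechanism might look like in practice, here is a small check that reviews a recorded fairness metric over time and raises an alert when it slips past a threshold. The metric, dates, and threshold are all placeholder assumptions:

```python
def check_fairness_drift(metric_history, threshold=0.1):
    """Flag review cycles where a fairness metric degrades too far.

    `metric_history` holds (date, gap) pairs recorded after each
    retraining or review cycle, where `gap` is some fairness measure
    such as the demographic parity gap sketched earlier. The 0.1
    threshold is a placeholder; a real team would set it deliberately
    and pair every alert with human review, not automatic action.
    """
    alerts = []
    for date, gap in metric_history:
        if gap > threshold:
            alerts.append(f"{date}: fairness gap {gap:.2f} exceeds {threshold}")
    return alerts

history = [("2024-01", 0.04), ("2024-02", 0.06), ("2024-03", 0.13)]
for alert in check_fairness_drift(history):
    print(alert)
# 2024-03: fairness gap 0.13 exceeds 0.1
```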

As we chart the course for the future of AI, let us wield the tap with intention, directing the flow of water not haphazardly, but towards the nourishment of a garden where equity and fairness bloom. In this endeavor, our collective creativity, wisdom, and action are the most potent forces at our disposal, capable of transforming the landscape of artificial intelligence into one that mirrors the best of our shared humanity.

Recommendations for Further Reading

The journey towards understanding and mitigating bias in artificial intelligence is complex and multifaceted. To deepen your insights into this critical issue, the following books provide valuable perspectives that span technology, ethics, economics, and systemic thinking. Each of these works contributes to a more nuanced understanding of bias in AI, offering tools, methodologies, and philosophical considerations that can inform efforts to create more equitable AI systems.

“The Art of Choosing” by Sheena Iyengar

Relevance: This book explores the psychology of choice and decision-making, shedding light on how biases influence our choices. Understanding these underlying mechanisms is crucial for identifying how AI can inherit human biases and for developing strategies to mitigate these biases in AI decision-making processes.

“Poor Economics” by Abhijit V. Banerjee and Esther Duflo

Relevance: Banerjee and Duflo’s examination of economic decision-making in conditions of extreme poverty provides insights into how economic contexts can shape data and algorithms. This work underscores the importance of considering diverse socioeconomic conditions when addressing bias in AI applications aimed at economic development and poverty alleviation.

“The Ecology of Commerce” by Paul Hawken

Relevance: Hawken’s discussion on the interplay between business practices and environmental sustainability offers a parallel to the impact of commercial interests on AI development. This book highlights the need for ethical considerations in AI, especially concerning biases that prioritize profitability over social good.

“Switch” by Chip Heath and Dan Heath

Relevance: “Switch” provides a framework for effecting change in organizations and systems, which is directly applicable to the challenge of altering established AI systems to reduce bias. The Heath brothers’ strategies for change can guide efforts to redesign AI systems in ways that are both effective and ethical.

“Hooked” by Nir Eyal

Relevance: Eyal’s exploration of how products and technologies engage and influence users can help us understand the mechanisms through which AI systems can perpetuate biases. This book is particularly relevant for examining the ethical implications of designing AI systems that influence user behavior and decision-making.

“Creative Confidence” by Tom Kelley and David Kelley

Relevance: The Kelley brothers advocate for unleashing creativity within all spheres of development and problem-solving. Their approach is essential for encouraging innovation in AI development to address and mitigate bias, suggesting that creative thinking is key to redesigning AI systems for greater fairness.

“The Fifth Discipline” by Peter Senge

Relevance: Senge’s work on systems thinking offers a powerful lens through which to view the problem of bias in AI. By understanding AI systems as part of larger societal systems, we can identify leverage points for change and develop more holistic approaches to mitigating bias.

Originally published at https://mindfuldiscoveries.in on March 27, 2024.
