Guardrails, Not Chains: Navigating the Ethics, Biases, and Global Regulations of AI

Tarrin Skeepers
AI monks.io
May 27, 2023

If you’ve been following our series, you’ll know that we’re deep in the AI rabbit hole, exploring the various aspects of this technology that has been shaking our world up like a snow globe. Just to recap, we’ve covered the evolution of AI, the types of AI, machine learning, deep learning, and we recently chatted up a storm about natural language processing (you can get caught up here). This time, we’ll navigate the not-so-funny world of ethics, biases, and the ever-evolving landscape of AI regulation. And just like navigating any labyrinth, let’s set some ground rules, mark some key milestones, and remember: no matter how lost you get, there’s always a way out.

Let’s kick off with fairness, a beautiful concept, and one that’s as elusive as a cat in a game of hide and seek. We’d love to think that our robotic pals are as unbiased as a perfectly balanced seesaw. In practice, AI fairness is more like a seesaw with an elephant on one side and a mouse on the other: not quite balanced. Why? Simply because AI learns from our data, and we humans have a long track record of both blatant and subtle bias throughout history. An infamous example, which I like to call the “Tale of the Not-So-Fair AI Recruiter,” shows how this bias breeds. Here, an AI trained on historical hiring data developed a penchant for male candidates over female ones, a behaviour it learned directly from the male-dominated hiring demographics of the past. Not exactly the model of neutrality we were hoping for, huh?
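To make the seesaw metaphor concrete, here’s a minimal sketch of how a fairness audit might quantify that imbalance using demographic parity, the gap in selection rates between groups. Everything below, from the function names to the decision lists, is invented purely for illustration:

```python
# Minimal sketch: measuring demographic parity in a hypothetical
# hiring model's decisions. All data here is invented for illustration.

def selection_rate(decisions):
    """Fraction of candidates the model recommended for hire."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs: 1 = "recommend", 0 = "reject"
male_decisions   = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% selected
female_decisions = [1, 0, 0, 0, 1, 0, 0, 0]   # 25% selected

parity_gap = selection_rate(male_decisions) - selection_rate(female_decisions)
print(f"Demographic parity gap: {parity_gap:.2f}")  # 0.50 -> heavily skewed
```

A gap of 0.50 would be a screaming red flag. Real audits use richer metrics (equalized odds, calibration) and real data, but the principle is the same: measure the tilt before you trust the seesaw.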

That brings us to the other elephant in the room — accountability. Determining who takes the fall when an AI system makes a boo-boo is as complex as a triple-decker sandwich with layers of developers, users, and the AI itself. Is Alexa to be sent to detention when she drops the ball? Would that even fit within the school’s policy? Should the developers bear the brunt? Or perhaps the users who fed the AI? These questions are the unsolved Rubik’s Cubes of our times and continue to give tech ethicists restless nights.

The last pillar in our trifecta of ethical challenges is transparency. Have you ever been caught in one of those “Are you really okay?” or “What do you mean by ‘fine’?” debates with your partner? Well, understanding AI decisions can be quite similar. We’re often left in a fog, trying to figure out why, for instance, Alexa suddenly suggests jazz music when our playlist screams deep house. Ideally, we want AI to be as clear as a summer’s day, laying out its decision-making processes like a map. But, as of now, that’s like expecting your goldfish to explain quantum physics (maybe he knows something we don’t?).
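For a taste of what “explaining a decision” can look like in practice, here’s a toy sensitivity probe: nudge each input and watch how the output shifts. The model, its weights, and the feature names are all invented; real explainability tools like LIME and SHAP apply the same perturb-and-observe intuition with far more rigour:

```python
# Toy sketch of one transparency technique: perturb each input feature
# and watch how a (made-up) model's score moves. This is only the
# intuition behind tools like LIME/SHAP, not a real implementation.

def toy_model(features):
    # Hypothetical "music recommender" score; the weights are invented.
    weights = {"tempo": 0.6, "bass": 0.3, "vocals": -0.4}
    return sum(weights[k] * v for k, v in features.items())

def explain(features, delta=0.1):
    """Report how much nudging each feature shifts the model's output."""
    base = toy_model(features)
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        print(f"{name}: {toy_model(perturbed) - base:+.3f}")

explain({"tempo": 0.8, "bass": 0.5, "vocals": 0.2})
```

Even this crude probe tells us which dials the model cares about; a production-grade explainer does the same thing, just with statistical guarantees attached.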

The path they take will be under our guidance.

Yet, amidst these quandaries, we must not lose sight of the larger picture. Remember how the advent of industrialization was accompanied by child labour, and with time, as society realized the harm, laws were enacted to protect children? We’re in a similar boat with AI, a boat that’s bobbing in a sea of rapid technological expansion and ethical ripples. It’s crucial to recognize these potential harms and create AI ethics that protect us and our future generations.

Now, let’s turn our attention to the global circus of AI regulation unfolding around us. As AI snuggles into our workplaces, experts around the globe are urging companies to revise their HR policies to address the ethical implications and data-security concerns surrounding its use. Amidst this brewing global debate, tech giants like Microsoft and OpenAI are coming forward with proposals for managing AI risks while promoting an inclusive vision for the technology. The debate, however, has raised a global question: who gets to regulate AI, and how?

While a number of strategies have been proposed, the European Union (EU) has stepped up to the plate, moving closer to passing the Artificial Intelligence Act. The law adopts a risk-based approach, sorting AI systems into risk tiers and prescribing obligations accordingly. However, critics worry that the new rules may stifle innovation, a concern voiced by AI developers including OpenAI.
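To show what “risk-based” means in code-shaped terms, here’s a sketch of the tiered logic. The tier names follow the Act’s widely reported structure, but the example systems, obligations, and every identifier below are my own simplified paraphrases for illustration, not legal text:

```python
# Illustrative sketch of the AI Act's risk-based idea: classify a
# system into a tier, then look up what that tier demands. The tiers
# mirror the Act's commonly reported categories; the details are
# simplified paraphrases, not legal advice.

RISK_TIERS = {
    "unacceptable": {"examples": ["social scoring"],
                     "obligation": "prohibited outright"},
    "high":         {"examples": ["CV-screening tools", "credit scoring"],
                     "obligation": "conformity assessment, logging, human oversight"},
    "limited":      {"examples": ["chatbots"],
                     "obligation": "transparency (disclose it's an AI)"},
    "minimal":      {"examples": ["spam filters"],
                     "obligation": "nothing beyond existing law"},
}

def obligations_for(tier: str) -> str:
    return RISK_TIERS[tier]["obligation"]

print(obligations_for("high"))  # -> conformity assessment, logging, human oversight
```

Notice how our not-so-fair AI recruiter from earlier would land squarely in the “high” tier, which is exactly the kind of system the Act is aiming at.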

Not to be left behind, the US has begun Senate hearings discussing the potential impacts of AI on the economy and democratic institutions. There’s bipartisan support for the creation of a government agency to regulate AI systems beyond a certain capability threshold. The White House, too, is seeking public input on national priorities for mitigating AI risks.

Even on an international level, the G7 countries have agreed to launch the “Hiroshima Process” to govern AI, aiming to ensure that AI development is human-centric and trustworthy. The initiative will kick off a series of cabinet-level discussions, with outcomes expected by year-end. However, defining an international standard for AI regulation has its own set of challenges, given the varying societal values among countries.

Just as we maintain stringent controls over the world’s most destructive weapons, there’s a growing call to build robust regulatory frameworks around the deployment of powerful AI systems. We don’t want to wake up one fine morning to find that a couple of super-criminals with advanced AI at their command have played out a real-world version of a sci-fi blockbuster, albeit one without a guaranteed happy ending.

Remember, the same AI power that can be turned to nefarious purposes can also be leveraged to build these protective measures. Creating AI to police AI is one potential solution, though it does sound a bit like asking the fox to guard the henhouse. Regardless, it’s a tantalizing proposition that warrants serious exploration.
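What might “AI policing AI” look like in the simplest possible terms? Something like the sketch below: a separate safety check that screens a generator’s output before anything reaches the user. Both “models” here are stand-in stubs I’ve invented; a real system would use trained moderation classifiers on both sides of the fence:

```python
# Sketch of the "AI policing AI" idea: run a generator's output past a
# separate safety check before releasing it. Both models are toy stubs.

def generator(prompt: str) -> str:
    return f"Response to: {prompt}"          # stub for a generative model

def safety_score(text: str) -> float:
    flagged = ["exploit", "weapon"]           # toy heuristic, not a real model
    return sum(word in text.lower() for word in flagged) / len(flagged)

def guarded_generate(prompt: str, threshold: float = 0.5) -> str:
    draft = generator(prompt)
    if safety_score(draft) >= threshold:
        return "[withheld by safety layer]"   # the fox, kept out of the henhouse
    return draft

print(guarded_generate("how do I bake bread?"))
```

The interesting design choice is the separation: the watchdog is a different system with different training, so a failure in one doesn’t automatically compromise the other.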

But is there an alternative? One could argue for a radical approach: let AI loose in the public sphere and trust in the age-old balance of good and evil. Leave it to the individuals to duel it out with their AI-powered weaponry and cross our fingers in the hope of landing in utopia, not dystopia. A risky gamble, indeed, and one we would need to consider very carefully.

The current discourse on AI regulation, while robust, is predominantly driven by the developed world. That’s understandable, given that these nations are leading the AI revolution, but it leaves a glaring gap in representation. The impacts of AI won’t discriminate between the developed and developing world; they will ripple across every corner of our global society, regardless of geography or economic status. Hence, it is essential to extend the discourse to all nations, including those still finding their footing in the AI arena. This isn’t a nice-to-have addition but an absolute necessity, because every voice matters when defining the boundaries of a technology set to redefine our world.

However, extending the discourse is a challenge akin to herding monkeys. There are issues of technology disparity, linguistic barriers, and more fundamentally, differing cultural perspectives on AI and ethics. But, let’s be optimists here — after all, we’re a species that put a man on the moon, aren’t we? Some of the solutions could involve international collaborations to bridge the technology gap, using AI itself to break down language barriers, and creating global platforms for open discussions. The United Nations could play a pivotal role in ensuring this equal representation, much like it has in dealing with other global concerns.

Ignoring the views of the economically less privileged, the rural, or those with no access to AI at present could have dire consequences. It might lead to the creation of a skewed AI world, one that is oblivious to the needs, nuances, and narratives of a significant chunk of our global population. Such an AI ecosystem could exacerbate existing inequalities and create new ones, a risk we cannot afford to take.

Remember, the goal is not just to make AI for all, but to ensure that AI is made by all, with the ethical compass of all. Only by including these diverse voices can we hope to develop an AI framework that reflects the collective wisdom of humanity. This isn’t just an ethical imperative but a pragmatic one: a diverse AI is a robust AI, one that can better serve us all. Voices from Africa, Latin America, South East Asia, and the Pacific Islands need to be heard. Generations depend on getting this right, and no nation should be left in the wake of the AI revolution.

To put it simply, the discourse on AI regulation is more complex than the plot of a Christopher Nolan movie. Striking a balance between fostering innovation, ensuring the ethical use of AI, and mitigating potential risks is a tricky tightrope walk. And while the challenges are many, including threats to safety, rights, privacy, and the environment, AI also brings transformative opportunities in sectors like healthcare, education, and the environment.

Just remember, as we continue to uncover the mysteries of AI, the technology is a tool. Its effects, whether wonderful or destructive, are determined by us, the users. So, the responsibility to guide the development of AI lies squarely on our shoulders. As with any tool, the ethics we employ in its use are a reflection of ourselves.

It’s only through collaboration across all boundaries that we will find the right way forward.

As we conclude this ethical exploration, let’s remember that the road to AI ethics isn’t a sprint but a marathon. And just like any marathon, the journey is filled with milestones, challenges, and triumphs. Our next article (found here) will unveil how AI has quietly become an everyday companion, from Netflix recommendations to spam email filters to the current frontier of AI-powered digital assistants. So, buckle up and remember, it’s not just about setting guardrails, but ensuring that the road we’re on leads us to a destination that mirrors our values.

Until then, tech adventurers, keep the AI conversation going, because it’s the only way to ensure that we aren’t just building smarter machines, but a wiser humanity.

Part-time techie with a full-time curiosity. Just trying to spread a little knowledge any way I can.