The Power of Tech for Good: Why AI Needs a Moral Compass

Gracefrances
5 min read · Jun 6, 2023


The emergence of Generative AI and Large Language Models has taken the world by storm, bringing Conversational AI to the forefront of awareness and dinner-table discussions across the globe. There is awe and fear in equal measure as we speed towards an increasingly digital future.

Things have been heading into the uncanny valley lately: some are convinced that AI is sentient, others are falling in love with their AI companions, and nearly everyone is second-guessing the legitimacy of something they’ve seen online. All of this has raised existential questions around ethics, transparency and how to tame the phenomenon that is Artificial Intelligence. Since tech leaders such as AI pioneer Geoffrey Hinton started voicing their fears about the existential risk of AI, the ethical concerns raised by these impressive technologies are posing tough questions, and the need for regulation is becoming ever more urgent.

The Ethical Dilemma

Ethics sits at the crux of many a robust discussion: philosophers, academics and ethicists have pondered the great existential questions of our time for millennia. These enquiries culminated in the human rights framework used in international and Australian law today. While AI advances raise many new considerations, the CSIRO stresses that there is no need to rewrite these laws; rather, they need to be updated to ensure they can be applied to AI technologies.

As it stands, there is no global governing body for AI ethics; rather, it falls upon individual countries to define responsible AI for themselves. This means that practices may differ significantly by location, with serious implications for people in countries with lenient regulations. This is particularly concerning for more vulnerable countries, which might be socially and culturally impacted by the potential misuse of AI.

I’ve heard it said that “just because you can do it, doesn’t mean you should”. These are exciting times, reminiscent of the sci-fi movies of our youth; yet, unlike a movie, what we create now will have far-reaching, real-life impact. So, if the good of humanity and the planet is not at the heart of everything we design, what are we designing for?

Conversational AI

Working within Conversational AI, we’re in a powerful position: designing products that are changing the face of communication, customer service, and essentially how we interact with the world. It’s a big responsibility.

As a conversational interface, Conversational AI works with our most inherently human trait: communication. Conversations are powerful; they can shape our days and even our worldviews. As Conversation Designers (supported by tech wizards), we create human-like interactions that streamline processes and simplify life, ideally with a bit of charm and a ‘surprise and delight’ factor thrown in for effect.

Yet, when designing Conversational AI systems, there’s more to consider than just a charming persona. There are many elements at play, so how can we ensure we are designing ‘tech for good’?

Human-Centred Design

Ethics can seem quite conceptual; however, AI ethics researcher Emma Ruttkamp-Bloem argues that ethics is not about abstract principles but should be a bottom-up process. For those of us working in the field, there are imperative questions to ask to ensure that we are designing in a way that benefits people and the planet.

Implementing responsible AI systems requires thoughtful, human-centred design. To mitigate potential risks — such as algorithmic bias — the design process needs to ensure that the technology has a positive impact on individuals, society and the environment. To create safe and secure AI products we need to commit to enhancing data privacy and security, promoting transparency and accountability, and implementing ethical considerations from the early stages of design.
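To make that last point concrete, here is a minimal sketch of what one early-stage ethical check could look like in practice: an automated audit that compares a model’s positive-outcome rates across demographic groups, using the ‘four-fifths rule’ heuristic as a rough flag for disparate impact. The data and group names below are invented purely for illustration; a real fairness audit would be far more involved.

```python
# A minimal sketch of a pre-launch fairness audit: compare a model's
# positive-outcome rates across demographic groups and flag possible
# disparate impact using the "four-fifths rule" heuristic.
# The data below is invented purely for illustration.

from collections import defaultdict

# Hypothetical (group, model_decision) pairs, e.g. loan approvals.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {group: positives[group] / totals[group] for group in totals}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "OK" if ratio >= 0.8 else "REVIEW: possible disparate impact"
    print(f"{group}: positive rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A check like this doesn’t make a system ethical on its own, but running it from the earliest design stages is one way to turn ‘transparency and accountability’ from a principle into a habit.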

The race towards AI progress has produced AI systems that lack an ethical groundwork, yet robust ethics will be integral if we are to guarantee safe and trustworthy products. At the recent US Senate hearing on artificial intelligence, lawyer and politician Senator Richard Blumenthal argued that sensible safeguards are not in opposition to innovation; rather, they are the foundation of how we can advance technologically while upholding human rights and dignity.

Collaborative Intelligence

As AI capabilities become more dazzling by the day, fears that AI will take over human jobs are rife, with the technology already performing work once done by people, such as translation and customer service.

Yet rather than jumping on board the AI-Takes-Over-The-World rhetoric, the more pertinent question might be: how might we use AI to complement our human strengths?

AI is not capable of original thought, yet it has powerful analytical and quantitative capabilities; humans, conversely, struggle to process vast amounts of data but bring originality, creativity, empathy and critical thinking. By adopting a collaborative approach to AI, we can leverage the power of the technology to enhance our own processes and skills. This ‘augmented intelligence’ can empower people to work more efficiently by giving them access to invaluable insights and data on the fly, cutting out the mundane aspects of a job and enabling greater meaning and productivity.

Using AI to complement — not replace — human strengths will allow us to maximise our intrinsically human skills and push the limits of our innovation and creativity.
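As a loose illustration of this kind of augmented workflow, the sketch below has an AI handle the rote analytical step, triaging and summarising an incoming support message, while the final judgement stays with a person. The `suggest_triage` function is a stand-in for a real model call; all names and messages here are hypothetical.

```python
# A minimal sketch of "augmented intelligence" in a support workflow:
# the AI does the rote triage work, the human keeps the decision.

def suggest_triage(message: str) -> dict:
    """Placeholder for an AI model that summarises and categorises."""
    category = "billing" if "invoice" in message.lower() else "general"
    return {"summary": message[:60], "suggested_category": category}

def human_review(suggestion: dict) -> str:
    """The person accepts or overrides the AI's suggestion."""
    print(f"Summary: {suggestion['summary']}")
    print(f"AI suggests: {suggestion['suggested_category']}")
    choice = input("Accept category? (y/n) ")
    if choice.lower() == "y":
        return suggestion["suggested_category"]
    return input("Enter category instead: ")

message = "Hi, my invoice for May was charged twice, can you help?"
final_category = human_review(suggest_triage(message))
print(f"Filed under: {final_category}")
```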

Human Oversight

Think of a toddler: she cannot be left alone, needs constant attention, training and care, and is essentially a little sponge, soaking up her surroundings, mimicking her parents and parroting any expletives she hears. With supervision and guidance, she gradually learns the ways of the world and right from wrong.

An AI is much the same: it cannot be given free rein, it learns from the data it’s fed, and it needs human oversight to learn from its interactions and to ensure its language models are correctly trained. Having a ‘Human in the Loop’ (or, even better, a diverse team of them) is essential to keeping the AI on track.

At this stage, the ability to train an AI model to behave appropriately around the clock remains out of reach, making human judgement and oversight essential.
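Here is a minimal sketch of what such a ‘Human in the Loop’ hand-off could look like: when the model’s confidence in its own answer falls below a threshold, the conversation is escalated to a human agent, and the human’s answer is logged as a candidate training example. All function names, thresholds and replies are illustrative assumptions, not a real system.

```python
# A minimal sketch of a "Human in the Loop" hand-off: low-confidence
# replies are escalated to a person, and the person's answer is kept
# as a labelled example for future retraining.

CONFIDENCE_THRESHOLD = 0.75
training_queue: list[dict] = []

def model_reply(user_message: str) -> tuple[str, float]:
    """Placeholder for a real model call returning (reply, confidence)."""
    if "refund" in user_message.lower():
        return ("You can request a refund from your account page.", 0.92)
    return ("I'm not sure I understood that.", 0.40)

def respond(user_message: str) -> str:
    reply, confidence = model_reply(user_message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return reply
    # Low confidence: hand off to a human agent and capture their
    # answer as a labelled example for future fine-tuning.
    human_reply = input(f"[escalated] User said: {user_message!r}\nAgent reply: ")
    training_queue.append({"input": user_message, "label": human_reply})
    return human_reply

print(respond("How do I get a refund?"))
print(respond("My flombulator is broken"))
print(f"{len(training_queue)} example(s) queued for retraining")
```

The design choice here is the threshold: set it too high and humans are swamped; set it too low and the AI is left unsupervised exactly where it is weakest.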

As designers and developers working at the cusp of great societal change, we have a responsibility to ensure that our designs are built with fairness, inclusivity, and equality at the forefront. However, the scope is too vast, and the implications too real for those working in AI to shoulder these responsibilities alone.

AI is a powerful tool that can be used to address some of our most pressing environmental and social issues — yet to truly flourish, rigorous standards will need to be set and upheld, requiring a collaborative effort between governments, stakeholders, academics, developers and designers alike. When grounded in strong, human-centred ethics, these game-changing technologies will have the potential to pave a more equitable and inclusive path forward for humankind.


Gracefrances

Conversation Designer at VERSA Connects, writing about language, linguistics and responsible AI.