What I Learned from ‘AI for The Rest of Us’
Key Takeaways: We can make AI equitable. AI can be used to unify humanity. Building trust in AI relies on better representation.
Introduction
This book is about the impacts of Artificial Intelligence on every human. Written by two distinguished experts in AI (and AI ethics), it explores the origins and implications of AI, good and bad, in lucid and heartening prose.
If you’re relatively new to AI, this book will empower you to engage in discourse around AI and implore you to advocate for representation. If you read a single book about Artificial Intelligence in 2024, this should be the one. For AI practitioners and enthusiasts, it adds much-needed context and a human-centric perspective often forgotten in today’s tech ecosystem.
About the Authors
Phaedra Boinodiris is IBM Consulting’s global leader for Trustworthy AI, having focused on inclusion in technology since 1999. She is also co-founder of the Future World Alliance, an organization dedicated to educating kids about ethical AI.
Beth Rudden is the CEO of Bast AI. She is a cognitive scientist and has worked at IBM as a Distinguished Engineer. She’s an extraordinary leader and role model. I know Beth through my mentor Ron who, an excellent judge of character, describes her as ‘moving, touching and inspiring.’ Of course, my appraisal of Beth’s book is biased. One point this book makes is that all humans are biased and, by extension, so are our data.
Summary
‘AI for The Rest of Us’ closely examines artificial intelligence through an anthropological lens. The book is concise, at 140 pages, and approachable for neurodiverse readers. Each page is rich with content that feels interactive, like a conversation. The lessons within are intuitive, like universal truths, and are paired with carefully curated resources that whet the appetite for further inquiry.
The book starts by laying out pragmatic definitions requisite to an informed dialogue about AI ethics, presenting constructs such as trust, ignorance, and ‘a conceptual model of data’ with perfect clarity. Statistical results such as the ‘diversity prediction theorem’ outlined in chapter 1 clearly establish the importance of diversity; a small numerical sketch follows the list below. Among the lessons:
- All models are biased because all humans are biased.
- If the training data are not representative of all humans, the AI will not serve all humans well.
- We can make AI equitable: it can be used to augment all people, and as a tool to mitigate bias instead of amplifying it.
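As a concrete illustration of the diversity prediction theorem, here is a minimal Python sketch of the theorem as it is usually stated (attributed to Scott E. Page, and not necessarily the book’s exact presentation): a crowd’s squared error equals the average individual squared error minus the diversity of the predictions. The estimates below are made-up numbers, not figures from the book.

```python
import numpy as np

# Diversity prediction theorem:
#   collective error = average individual error - prediction diversity
# Hypothetical example: five people estimate an unknown quantity.
truth = 42.0
estimates = np.array([30.0, 55.0, 47.0, 38.0, 51.0])

crowd = estimates.mean()                                  # the collective prediction
collective_error = (crowd - truth) ** 2                   # squared error of the crowd
avg_individual_error = np.mean((estimates - truth) ** 2)  # mean squared error of individuals
diversity = np.mean((estimates - crowd) ** 2)             # variance of estimates around the crowd

# The identity holds exactly, for any set of estimates and any truth.
assert np.isclose(collective_error, avg_individual_error - diversity)
print(f"collective error     = {collective_error:.2f}")
print(f"avg individual error = {avg_individual_error:.2f}")
print(f"prediction diversity = {diversity:.2f}")
```

Because the identity is exact, the only way a group can beat its average member is by disagreeing: homogeneous data and homogeneous teams leave the diversity term at zero.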
Organizations can also benefit from the considerations of this book. Explored within are the principles IBM, one of the world’s oldest and largest technology companies, has set with respect to AI. These ‘rules to live by’ are present throughout the book and are laid out with a detailed analysis of their importance. Finding a set of practices and principles to abide by can ensure we stay the course. This is true not just for AI but for many things.
1. The purpose of AI is to augment human intelligence.
Without a purpose, Artificial Intelligence is useless. Before addressing performance metrics, we can simply ask whether an AI product is fulfilling its purpose. ‘AI for The Rest of Us’ explores the more complicated questions of ‘What’s the experience of being augmented like?’ and ‘What does it mean to be empowered by AI?’
2. Data and their insights belong to the creator.
Harvesting data without consent has become commonplace. People don’t always get the chance to ‘opt in’ to AI and in many cases (such as our engagement with government services and prospective employers) don’t have the choice to ‘opt out’. Digital rights are crucial to principle 1: we cannot empower people while also exploiting them.
3. New technologies, including AI systems, must be both transparent and explainable.
AI can and should be explainable, especially when such algorithms are used to make high-stakes decisions about all humans. Making a meaningful decision with life-altering consequences demands the capability to provide a causal explanation.
It’s easy to see why we should abide by these three principles: we all have a stake in how AI is built and used. Less obvious is how. In their book, Rudden and Boinodiris emphasize the part everyone has to play in building trust in AI, inviting us to ask these kinds of questions about the AI systems we interact with:
- Where is the data coming from? Be mindful that data may not be representative of the target audience. For this reason, we must always consider the original source.
- What is the context of this data? What is the data not telling us? Context includes whether consent was given to collect the data and whether it is reasonable to expect it would be used for this purpose.
- Who is this funded by? Also important to consider is the motivation behind the collection and storage of the dataset. There’s no such thing as a free lunch.
Insights
- The state of representation in the global tech sector currently leaves a lot to be desired, and “people who are not AI practitioners are not well represented”. We already know that predictive and generative models are prone to inheriting bias from their training data. We must advocate for representative data in order for AI to be inclusive.
- Bias is not a dirty word. Our subjectivity is what makes humans ‘human’ and our data (and therefore models) are destined to share this with us. The lesson here is not to treat AI models as objective, but to better understand their strengths and weaknesses. Beware of those who claim their AI models are free from bias.
- We must educate ourselves and our children about AI. The discovery of electricity set in motion a wave of change beyond comprehension. While this revelation brought enormous benefit, the associated risks soon became apparent. Since the advent of electricity, standards and certifications have been adopted to protect us. AI calls for the same kind of literacy and safeguards.
- Our data are sacred and precious; we must protect the digital rights of all humans for the sake of our future. We must treat data as an artifact of human experience and not sever it from context.
Above all else, the reader gains an awareness that we must all be involved in the creation of AI because it represents the forging of our future. Furthermore, the authors illuminate the social and political powers at play and frame the rise of AI as a material inflection point for humanity. In closing, this book left me with a renewed (and refreshingly positive) perspective on the future of humanity and technology.