I’ve said this before and I’ll say it again: let’s up the drama in HCI. What do I mean by that? I mean recognizing that the fancy and fun tools we create need to be a sidenote, not the main attraction. I mean recognizing that digital literacy, reliance on technology for health or safety, privacy concerns, and digital manipulation can be life-or-death situations. Sometimes that means actual death, as in war or healthcare scenarios. Other times it means an absolute misuse of power: supporting the prison industrial complex or the horrific manipulation of children. “Life or death” might mean a Facebook newsfeed so warped that it fuels ethnic cleansing, or online harassment so severe that it silences marginalized users.

For those of you who haven’t thought along these lines, or have only passingly considered how HCI can be harmful as well as good, I encourage more reflection. For those of us who want to do better and feel stranded, read on. While I opened this post with doom and gloom, I promise that I will try my very best to optimistically operationalize what we can do for real. Many of us have thought about these terrible side effects of technology, and we feel powerless, or frustrated, or tired, or simply want to focus on something more positive and easier. Those feelings are valid, and we all get tired sometimes. But I’ve found a few ways to sit with my frustrations, and they’re weaving their way into my vision of a Machine Learning literate world.
I dream of a world where a patient walks into the doctor’s office after receiving results from an AI-mediated imaging scan. The patient knows that their original images were taken in another country or with an outdated machine. They ask, “Was the training data used for your algorithm diverse enough to compare with my previous scans from a different machine?”
I dream of a world where a mother can advocate for her children and her family against a social services algorithm that determines that her child welfare score is low. I dream of a world where she can ask, “Was this decision based on my geographic neighborhood? My race? My finances? Was only public data used? What if I have private insurance? Will that data be used? How did you evaluate the success of your model?” And I even dream of a world where that social worker can answer back, and there can be a critical discussion of what the model missed, where it is failing, and how it can be improved.
I dream of a world where people demand accountability from big tech companies, but I also dream of a world where those big tech companies have fallen into the hands of everyone. Because everyone, from policy makers to business people to elementary school teachers to Uber drivers to doctors to fashion designers to circus performers to accountants to programmers, will have a basic grasp of AI and will be able to hold a critical conversation with literacy and participation. Reading, writing, basic math, history, art, physical fitness, health, science, programming, and How-To-Understand-Machine-Learning-Insert-Better-Name-Here will be our cornerstones.
What is the theme of this dream? It is self-advocacy. I imagine a world where we can each independently begin searching for the information we need. Eventually we may need an expert, but with a little bit of literacy we can go a long way. We can learn to ask the experts the right questions, develop opinions beyond affective judgments, and add to the innovation and creativity around what can and should be created. I dream of a world where our data literacy is so fluent that we can advocate for our children in all their uniqueness; our friends in all their specificity; and ourselves in all our wonder. I imagine my work as a grassroots approach to Machine Learning literacy, where the power is kept in the hands of the people.
Now, even when individuals have power, they can be thwarted from making real change. There are powerful companies, institutions, and societal practices that can make self-advocacy seem futile. But guess what! As a research community, we can each find our place in the Points of Intervention model. There are people at Microsoft, Facebook, Google, Amazon, etc. who can find their own radical developer inspiration and start infusing these ideas into the minds of the powerful people around them. There are researchers who can study how effective different tactics for literacy are among different marginalized groups. There are plenty of privacy and ethics researchers who will actively seek to shut down the systems that simply shouldn’t exist in the first place, and that no one needs to be literate about. There are K-12 teachers who will nourish young minds as they learn and grow with technology. And there are people who will design small ways to keep the human in the loop as they learn what Machine Learning can do.
This doesn’t mean we shouldn’t be critical of our fellow researchers. But it does mean finding the point of intervention that best fits your talents, your passions, your abilities, and your visions for a machine-learning-literate world.
There are a million ways we can facilitate Machine Learning literacy. It takes our whole community, working on each problem, to approach the widespread opacity of AI from several different points of intervention. Keep this in mind as we argue amongst ourselves about the most important issues on the table. Let’s split up the work and listen to one another as we explore a relatively new and important space: machine learning literacy and explainable AI.
Disclaimer: I know that not all of HCI deals with Machine Learning. And I understand what I don’t understand, which is each subfield’s own view on how it is helping the world. Accessibility tools, healthcare mediation, education, and so on might not work with social machine learning at all, and those fields are doing great things in the HCI world. There is also great work on social justice for immigrant families and technology, for sex workers, and more. I’m definitely not saying that HCI doesn’t have its heart in the right place. All I want is a little more attention and focus on how technology impacts everyone, often in classist and racist ways. We can’t all have personal robots or VR in our houses (yet), and the millions of Black and brown workers creating the latest tech don’t seem to be as represented in HCI conversations.