It’s Indigenous Peoples’ Day. Your AI Assistant Might Tell You Otherwise.

Ariam Mogos
Published in The Startup
8 min read · Oct 12, 2020

If you ask any of several AI assistants today “who discovered America?”, the alarming response you might get is:

“Christopher Columbus… Americans get a day off work on October 10 to celebrate Columbus Day. It’s an annual holiday that commemorates the day on October 12, 1492, when the Italian explorer Christopher Columbus officially set foot in the Americas, and claimed the land for Spain. It has been a national holiday in the United States since 1937.”

It’s Indigenous Peoples’ Day and there are many reasons this response is a problem.

First, let’s start with the facts. Christopher Columbus did not discover America. He initiated the “Columbian exchange”, which established settler colonies through violence and genocide. The dominant Eurocentric narrative about Columbus discovering America has been challenged and delegitimized for decades.

Second, whose narrative is being centered? When this question is asked, the perspective and history of the Indigenous peoples of America, who were in fact here first, are not addressed or recognized by this AI assistant. Why does this matter if many of us already know that Christopher Columbus didn’t discover America? Because there are still many people, including young people in the United States and across the world, who don’t know, who have been taught to accept this dominant narrative, or who have yet to learn about this period of history. This AI assistant is disseminating and reinforcing a false historical narrative to thousands, if not millions, of people, and this was a design choice.

All the information disseminated by technologies like AI assistants reflects design choices made by people, and people can choose either to reinforce oppressive narratives or to amplify the histories of those who have long been oppressed.

What if, when someone asks an AI assistant “who discovered America?”, the epistemology, knowledge, and perspectives of the Indigenous peoples of America were what disseminated from over a billion AI assistants?

“Who discovered America?” is just one question. What other questions are there for us to discover, unpack and make space for critical discourse? How might we re-center non-dominant perspectives through technology to advance social justice and equity?

Controlling the minds of the masses.

By the year 2026, the AI market is expected to reach $300.26 billion, and one of the primary factors driving that demand is AI assistants like Google Home, Siri, and Alexa. There are already over a billion Google Assistant devices in homes, offices, and other spaces, and that number will only grow. These technologies have incredible capabilities and help us do everything from completing mundane tasks to providing us with timely information. Want the latest news from NPR? Need directions to get to a friend’s place? Want to find out how cold it is outside for your morning run? Ask Google, Alexa, or Siri.

These technologies can perform many convenient functions and are becoming increasingly accessible, but are we assessing how they’re unconsciously shaping our understanding and knowledge of the world? Are we equipping our young people with the skills to recognize the influence of these technologies, question their authority, and push back?

Malcolm X once said:

“The media’s the most powerful entity on earth. They have the power to make the innocent guilty and to make the guilty innocent, and that’s power. Because they control the minds of the masses.”

AI assistants and other technologies are no different from the media or our education system; they’re an extension of this apparatus and wield bias and influence through power. Safiya Noble, Ruha Benjamin, Cathy O’Neil, and other scholars have thoroughly documented the many biases, racist and sexist in particular, perpetuated by emerging technologies. In spite of this scholarship, emerging technologies are positioned by the technology industry as “neutral,” and when there have been incidents of bias, they’ve been written off as innocent “glitches.” Joy Buolamwini, Timnit Gebru, and other prominent computer scientists have uncovered how these technologies and the datasets they use are designed and curated by human beings who encode their own biases, values, and identities.

If we return to our AI assistant and Christopher Columbus example, it’s possible that the person(s) who created the algorithm designed it to pull up the top Google search engine result fueled by advertising dollars (VOA News), without taking the time to critically review the information for historical accuracy; or they used autosuggestion; or they manually curated the dataset and believed it to be a good source of information for users, but we really don’t know.

“I don’t know how to respond to that.” — AI Assistant.

Unlike our nightly news anchor, whom we can tweet at, or our radio station, which we can call in to, or the editor of our local newspaper, to whom we can write a letter or op-ed, emerging technologies maintain fortified black boxes. Actual people remain nameless and faceless, and this prevents the creation of spaces for engagement or discussion.

“How does one escape a cage that doesn’t exist?”, Maeve (a robot host) from Westworld ponders in season three, and it’s a question that aptly reflects this dilemma. The invisibility of how decision-making processes are designed and embedded in emerging technologies, and the perceived divorcement from human bias or error, is what makes their influence so insidious. Many scholars cite how social trust and overdependence on technology prevent us from questioning these black box algorithms and data sources. We believe it’s not our place, we’re not the technical experts, we’re told it’s too complicated, we don’t think about it at all, or we’ve been indoctrinated into believing that everything in our Google search return is accurate and exactly what we need to know (“just GOOGLE it!”).

AI assistants and other emerging technologies are a great case study for Foucault’s knowledge-power theory, which posits that power is everywhere and pervasive, established through accepted forms of knowledge, scientific understanding, and “truth.” Few industries are better at upholding “universal truths” than the technology industry. As we saw in the case of the AI assistant and Christopher Columbus, these “universal truths” prop up dominant narratives which continue to oppress non-dominant peoples. Our consciousness and ability to rebel against these universal truths and dominant narratives are fundamental to dismantling structural inequity.

Why rebelling is even more important for K12 now.

AI assistants are increasingly being used as educational aids by young people to answer questions and fact-check their work outside of school, and these technologies are being positioned as tools to bolster the development of inquiry and curiosity. When schools are at their best, children conduct research on the web at school with the support of teachers and librarians, who are trained educators tasked with helping them build their information and media literacy skills. With adult guidance they learn how to evaluate a source, debate its content with their peers, and create their own content.

How are AI assistants and black box algorithms altering this dynamic, especially in light of the COVID-19 global pandemic? How might these technologies create even greater harm at scale in K12 education through the dissemination of misinformation and dominant narratives that are prioritized according to which private interest has the biggest budget for search engine optimization?

We at the Stanford d.school are determined to support educators, families, and children to participate: to see what’s not visible, question these technologies, and embrace the role of creator and decision-maker. We’re also determined to equip designers and technologists with the skills to reflect on their own positionality, recognize discriminatory design practices, and inflict less harm. If we are serious about equity, we must thoroughly evaluate the implications of our work on society before and iteratively as we design, and those who might be affected should give the green light before we set our creations loose in the world.

Good intentions aren’t good enough. This is why we created “Build a Bot.”

A Peek Inside the Prototype: “Build a Bot.”

There are many layers of design involved in creating AI assistants, including how we interact with them, how they select and collect the information they share with us, and what they do with the information we give them (yes, we give them information; sometimes we just don’t know it). In our prototype “Build a Bot,” educators, families, and young people can design their own personalized responses to help requests and contend with the implications of various design choices. If you asked Alexa for directions, how would YOU want Alexa to respond? That’s an intentional design choice that we as designers make and can change.

Educators, families, and young people can explore other design choices that aren’t always made public but are shaping our society and future, and sit in the driver’s seat. As you build and craft your own AI assistant, or tinker with one you might have, this learning experience will provoke questions like the following (a small code sketch of these choices follows the list):

  • If my AI assistant doesn’t understand a question someone asks, how should I design it to respond? What kind of questions should I create so that my AI assistant can answer someone’s question and truly be helpful?
  • When someone asks my AI assistant a question, where should it get the answers or information from? Newspapers, Twitter, Wikipedia? Is one source better than another? How do I know? Whose perspective does this information center, and is it propping up a dominant narrative or misinformation? Should I pick a source that presents multiple perspectives?
  • Should my AI assistant be able to listen to every conversation someone has, and is that conversation safe? Where does that data go and should it be saved? Should someone else have access to it?
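
To make these choices concrete, here is a minimal, hypothetical sketch in Python. It is not the “Build a Bot” prototype itself (the prototype is a deck of cards and a learning experience, not code), and every name, source, and response below is an illustrative assumption. It only shows where each of the three choices above would live: the answers a designer writes, the fallback for unrecognized questions, the sources the bot may draw on, and whether conversations are retained.

```python
# A toy, hypothetical sketch of the design choices surfaced by the cards:
# what the bot says, where its answers come from, and what happens to the
# words users give it. All names and responses here are illustrative.

# Design choice: which sources the bot may draw on, and in what order of trust.
SOURCES = [
    "curated responses centering Indigenous histories",
    "a general reference encyclopedia",
]

# Design choice: the responses the designer writes, keyed by normalized question.
RESPONSES = {
    "who discovered america": (
        "Indigenous peoples have lived in the Americas for tens of thousands of "
        "years; Columbus's 1492 arrival began European colonization, not discovery."
    ),
}

# Design choice: what the bot says when it does not understand.
FALLBACK = "I don't know yet. Would you like to help me learn how to answer that?"

# Design choice: whether the bot retains what users say to it.
SAVE_CONVERSATIONS = False
conversation_log = []


def respond(question: str) -> str:
    """Return the designer-chosen response for a question, or the fallback."""
    if SAVE_CONVERSATIONS:  # retention is an explicit choice, not a default
        conversation_log.append(question)
    key = question.strip().lower().rstrip("?")
    return RESPONSES.get(key, FALLBACK)


if __name__ == "__main__":
    print(respond("Who discovered America?"))   # designer-chosen answer
    print(respond("What's the weather like?"))  # falls back gracefully
```

Every line in a sketch like this is a decision someone made: changing FALLBACK, reordering SOURCES, or flipping SAVE_CONVERSATIONS changes what every user of that assistant experiences.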

These cards were inspired by the early work of Josie Young on the Feminist PIA (personal intelligent assistant) standards, and the wonderful work of the Feminist Internet and Comuzi on F’xa. This prototype along with more information can be found here.

A number of popular AI assistants were recently updated with data sources to show support for the Black Lives Matter movement (“Black Lives Matter”), and if a person asks “do all lives matter?”, they all express some version of “saying ‘black lives matter’ doesn’t mean that all lives don’t. It means black lives are at risk in ways others are not.”

While it’s encouraging to see this response to the shifts in global discourse around police brutality, what was the response to this query a few months ago? “I don’t know”? “Yes”? It shouldn’t take a civil rights movement to prompt the technology industry to simply do the right thing.

My colleague Manasa Yeturu and I started re-phrasing the popular slogan “design starts with the user” to “design doesn’t start with the user, it starts with YOU,” which includes not only examining our own positionality, but also all that we don’t know and all the ways in which we fail to act, fail to learn more about others, and fail to prevent harm.

Everything we do and everything we don’t do is an intentional design choice.

Join us.

What questions do you want to discuss and debate with AI assistants and their creators? What new help requests should we add to this deck of cards? Tweet at us @k12lab.
