Culture meets Artificial Intelligence

6 Reflections on Stanford’s Culturally Relevant AI Summit

Kursat Ozenc
Ritual Design Lab
6 min read · Dec 11, 2018


This past October, we attended the Culturally Relevant AI Summit at Stanford, organized by MediaX. The day comprised intriguing lightning talks and discussions. Here are our six reflections from the day AI met culture.

1. Artificial intelligence is a mirror of our biases and vulnerabilities

Rama Akkiraju from IBM Watson gave a talk on the fairness of AI models. Her session also doubled as a 101 introduction to AI, with comprehensive maps and representations. She emphasized that existing artificial intelligence models are mirrors of our human-to-human biases. When releasing these models, it's critical to state their biases up front so that people who sign up to use them know their limitations and side effects. She then gave an excellent overview of the explainability of AI models. There are two paths to the creation of an AI model. The first is a black-box model with no clear rationale for why the algorithm makes a certain decision. The second is a transparent model with clear reasoning behind algorithmic decisions. The latter is harder to build; however, it might be preferable, especially in domains where the stakes are high and consequential, such as law and healthcare. Our key takeaway was that explainable AI is a wise choice, and black-box AI is not the only choice.

2. AI is an opportunity for a more inclusive culture

During the Insights from Embodied and Non-embodied AI Panel, industry experts (Mariana Lin, Cory Kidd, John Ostrem, and Annabell Ho) talked about their practice and research. For us newbies, the distinction between embodied and non-embodied AI was useful. Embodied AI means physical, nuts-and-bolts robots, whereas non-embodied AI means conversational and virtual agents. The distinction matters because their design and development run in different modalities. Mariana Lin's talk was interesting because she has worked in both modalities, on Apple's Siri and, more recently, Sophia of Hanson Robotics.

According to Mariana, each kind poses its own challenges. With Siri, she observed that people can project their own image onto whom they interact with, whereas with Sophia, people interact with a physical robot that gives clear cues of gender and personality. In Siri's case, her insight into the power of universal language was noteworthy. Regardless of language (Siri speaks 36 different languages), people interact with artificial intelligence along a spectrum from humorous to functional. What humor and function mean can differ, but the human need for connection is similar across cultures and geographies. Later in the panel, she put forward a strong vision for AI: with AI, humankind has the chance to create a pan-global culture that is more inclusive and open to friendship and diversity. The Thoreau quote she opened her talk with made more sense after her remarks on culture.

“The language of friendship is not words but meanings. It is an intelligence above language.”
Henry David Thoreau

3. AI needs a foundation, a core of ethics and values

At some point, the discussion turned to the need for a foundation for artificial intelligence, where algorithms rely on ethics and values to make wise decisions. In such a scenario, AI systems might not serve whoever asks whatever of them. At that point, one panelist reminded the audience of the ramifications of Facebook's algorithms, which prioritized profit over fairness during elections. In a more grounded AI scenario, human-to-AI interaction wouldn't be a one-size-fits-all conundrum but a more nuanced one.

4. Hey, AI, be mindful of representation bias

At least two presenters mentioned the famous paper Most People Are Not WEIRD by Joseph Henrich, Steven J. Heine, and Ara Norenzayan. The paper's main argument is that most existing behavioral research is based on people from WEIRD societies: Western, Educated, Industrialized, Rich, and Democratic. And the Western world tends to make generalizations about the rest of the world based on its own perceptions and understanding. There are striking statistics to support this observation: 96% of research participants in behavioral science come from WEIRD societies, even though these societies represent only 12% of the world's population. This means our generalizations are biased and do not represent the rich diversity of the rest of the world's population. Perhaps there's nothing new to this since Edward Said's seminal work, Orientalism. The presenters highlighted the danger of AI repeating the mistakes of modernism and orientalism, encoding and ingraining centuries-old biases into the emerging intelligent systems.

Most people are not WEIRD, which makes a strong case for more inclusive research in the future of AI

On a separate brainwave, culture is a beautifully weird thing. When we call something weird, especially something we are not familiar with, we are being challenged by the unfamiliar and the new. As designers, we embrace this weirdness and quirkiness. It gives us paths to reach the new, the unique, and the beautiful.

5. Human-robot rituals are in their infancy

During the Q&A, one of us asked, "What is your take on human-machine rituals, as rituals are one of the strongest embodiments of human culture?" The reaction to this question: "We are not there yet, to intentionally design human-robot rituals." It made sense up to a point, as the panelists had earlier been discussing the challenges of reaching a threshold where machines have a good understanding of context and human needs. Without that basis, designing rich interactions like rituals might be a secondary priority. On the other hand, without developing a lens for rich interactions early in this infancy, it might be difficult later to break established norms and design culturally relevant interactions. Then Cory Kidd, one of the panelists, mentioned that they do have rituals of some kind, which they call signature interactions, between their Catalia robot and the patient. We noted this in our nomenclature as one of the keywords to look for in future conversations.

6. Ritual design can be a way to articulate culture design

Later in the afternoon, the importance of a ritual lens came full circle with the hands-on workshop session. The MediaX team had prepared a design exercise centered on images of various rituals from all over the world. The challenge was to design an inclusive future experience to improve the current situation of the people in the images. We weren't given any clue as to where or when these images were taken. It was intriguing to listen to people's readings and interpretations of these rituals, from Native American tribes to Mexican communities. Images are powerful tools for reading and writing a certain kind of culture. Without the context of these rituals, we had to rely on our past experiences, biases, and (mis)perceptions. In a way, the exercise put us in the shoes of an artificially intelligent machine and helped us empathize with it.

On another brainwave, this workshop could be improved with a series of scaffolding exercises. For instance, there could be two rounds of design exercises: the first without any clue, the second with a clue about the cultural context of the images. This could help participants see the striking difference and shift their mindset toward designing with cultural references.

We’d like to thank the summit organizers and participants. If you are curious, you can watch the sessions on MediaX’s YouTube channel. We’d love to see more summits of this kind, where industry and research get together and tinker with the future of culturally rich and relevant human-robot interactions.

Thanks for reading. If you enjoyed this article, feel free to hit that clap button 👏 to help others find it.
