Q&A with Transdisciplinary Artist Stephanie Dinkins

Stephanie Dinkins works with emerging technologies and community engagement to confront bias and inequity in artificial intelligence.

Future of StoryTelling
Jan 10, 2020 · 10 min read

In her work, she collaborates with communities of color to co-create more inclusive, fair, and ethical artificially intelligent ecosystems and to foster greater data sovereignty and social equity. Dinkins is a Creative Capital Grantee, a Sundance New Frontiers Story Lab Fellow, and the first Artist in Residence at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Wired, Art In America, Artsy, Art21, The New York Times, Hyperallergic, the BBC, Wilson Quarterly, and a host of popular podcasts have highlighted Dinkins’s art and ideas.

A lot of your work as an artist explores the relationship between culture and technology. How did you come to be interested in this topic?

I came to be interested in the intersection of culture, technology, and really race and aging because I started to talk to Bina48, a humanoid robot who appears to be a black woman. Our interactions raised a bunch of questions in my mind about the technologies that we’re creating for the future, and how they intersect with people, our cultures, and our values.

In a general way, technology has always been a part of my life. Even the toys I would request as a kid were technology-based. And so I’ve always been interested in technology, and my respect for and use of creativity come from my grandmother, and the way our family approached domestic life and problems.

Speaking of your family, can you tell us a little bit about your latest project, Not the Only One, and how you came to create it?

The project is an attempt at building a multigenerational memoir of my family — a black American family — and that’s important because we’re looking at a specific way of encountering the world.

I am trying to produce a voice-activated deep learning chatbot — something that people can talk to — that holds my family’s history, is informed through oral history, and can convey some of our values to others in conversation.
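As an aside for readers curious about the mechanics: below is a minimal, hypothetical sketch of a retrieval-style chatbot that answers questions by matching them against stored oral-history snippets. It is a toy, not Dinkins’s system: Not the Only One uses deep learning and voice interaction, while this sketch relies on simple bag-of-words similarity over invented placeholder transcripts.

```python
# A toy retrieval chatbot: answers a question with the stored oral-history
# snippet most similar to it. All transcript lines are invented placeholders,
# not Dinkins's family data.
import math
from collections import Counter

transcripts = [
    "My grandmother always said hard work and community come first.",
    "We grew our own vegetables and shared them with the neighbors.",
    "Family dinners were where the stories got passed down.",
]

def vectorize(text):
    # Bag-of-words term counts; a real system would use learned embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(question, fallback="Tell me more about that."):
    scored = [(cosine(vectorize(question), vectorize(t)), t) for t in transcripts]
    score, best = max(scored)
    # When nothing in the archive matches well, fall back to a stock phrase,
    # loosely analogous to the "would-be" fallback Dinkins mentions later.
    return best if score > 0.1 else fallback

print(reply("What did your grandmother say about hard work?"))
```

A production system would replace the word counts with a trained language model and add speech recognition and synthesis on either end; the retrieve-or-fall-back structure, though, is a common backbone for conversational archives of this kind.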

The specificity of my family’s values and history feels important here because technologies, especially artificially intelligent systems, are starting to homogenize us as humans, so that we are the sum of all of our data — as opposed to a Venn diagram that intersects all of our data. The further we go along, and the more we encode ourselves into these systems, the more we lose the nuances of who we are — the particularities of how we exist in the world and our ways of being. We start to lose the richness of human culture. And that’s a lot to give up.

This is obviously a personal project for you — you’re exploring these big ideas through very personal subject matter. Can you talk about your personal interest in Not the Only One?

Yeah, so, I’m trying to do two things. One is to preserve a set of values that I think are significant to my family, and to society more broadly, but that are disappearing. When I think about two generations down the line, and them not having deep connections to those values, it makes me sad. So I’m trying to find ways to contain that and have a chatbot that even a five-year-old can walk up to, and talk to, and maybe hear a glimmer of what my grandmother would’ve said to them from their own context.

And personally, I lost my mother when I was quite young, and I always say I would give anything to understand some of the ways that she actually thought and operated in the world. And I wondered what it would be like to have a system like this. So in a way, it’s trying to preserve some of that through those who are left here, now, who knew her.

Are there specific strengths that you focus on in terms of using a chatbot as a documentary medium, versus something more traditional like film?

I think about it in terms of what it means to be in conversation with something rather than having information delivered to you in a more linear format. Querying something, and then digging deeper into that query verbally, somehow attaches to our minds — and our bodies — in a different way. I’m interested in how we convey story in that way, and I’m also thinking about traditions of conveying information that are not based in the book or film or video, but in verbal communication, face-to-face transfer of information. That seems really important — the way that we embody the information. I’ve found that even when I listen to books, somehow I feel that information in a different way than when I read the words on a page. This way I feel there’s agency for the person asking questions, and there’s a kind of agency for the information as well.

One other issue that you’ve talked about is the ability for AI today to assist in re-creating consciousness. Can you talk about what you mean by that, and the opportunities and the risks that might come with re-creating consciousness?

The consciousness question is such a big one. It’s interesting to think and try to parse out — where does our consciousness lie? If it’s an amalgamation of our experiences and our memories, those are all things that are collectible and savable, right? So if you put that into a system that also has the ability to a) speak those words, and b) analyze that information and then offer it back out from its own point of view, what have you made? Is it a kind of consciousness? Is it a set of values? What is it that we’re getting at? I always wonder where that line is.

With Bina48, the robot I first started talking to, the Terasem Foundation isn’t even shy about saying “we are trying to make a consciousness.” I, on the other hand, feel like I’m stumbling into a space where I’ve made this thing and, oh my gosh, it seems to be making its own way of thinking. But it’s still based on the information that it’s been given. So it’s quite interesting to hear it answer questions in a way that seems adjacent to the information that’s been given, but not direct. It’s not directly parroting us — it’s coming up with things that are just its own. In this iteration of the project, one of its fallbacks is to say “take it to the would-be.” We’re trying to figure out where that came from — that is not something anyone has told it.

I wonder if you’ll ever find out what it means.

Yeah, right. In some ways I want to know, in some ways I’m like, “Oh, no, that’s okay, ’cause it makes some kind of sense, and I’m happy with that.” But it’d be interesting to know where it came from.

Yeah, or it will be interesting to see if you start embracing that in your own dialogue with people.

Oh, I’m totally embracing it. That’s too good.

So, it seems to me that through your work you’re always exploring the ethics of AI. If you had to pinpoint some of the most important ethical dilemmas that AI is presenting to our world right now, what would you focus on?

You’ve got me at a great moment. I’m currently at a conference hosted by the Stanford Institute for Human-Centered Artificial Intelligence, and it’s really interesting because I just sat through two days of people thinking about ideas of governance, ideas of ethics, ideas of values. What values do these systems get imbued with? It’s really interesting to hear this idea of, “well, our values have to be within these systems,” and by “our values” they were basically talking about liberal values that are espoused at institutions like Stanford. That’s quite exclusionary. A lot of times liberal thought around people who are under-utilized is about tokenizing them. If we can’t get beyond that, how do we actually say that those values will be the best for AI systems?

It was also suggested that the UN Universal Declaration of Human Rights be used as a model, which might be a good starting point. I think it needs to be picked apart and thought through very carefully to suss out omissions, failures, unbalanced application of ideals, and new requirements brought about by AI technologies seeking rights of their own. Once we start encoding these ideas into systems, and saying this is the value system, it becomes much harder to change and go back.

What else do you think we can or should be doing to make AI more inclusive?

I think there’s work being done, but then I think there’s the harder work of figuring out how to bring people who are not usually in rooms into rooms and have real representation. That means not only what one might think — like people from different communities or organizations. We need to be asking, “Okay, who’s the guy on the street who also needs to be thinking about this?” and “How do we bring people in at the community ground level — into really thinking about what this means to them, and by extension, all of us?”

We need to be opening this up so broadly that people really have an opportunity to chime in. I’ve been thinking lately that a lot of this comes down to educating people. If we want people to be able to chime in, they need to be able to think through the potential impacts of these systems. We need to educate all of our people to be critical makers and guardians of our shared fate.

These are not easy problems to solve, but they’re problems that need to be solved by people. I know this is something else you’ve talked about, that rather than worrying that AI and robots will kill us all, we need to think about opportunities to partner with these systems. Is that how you see things continuing to unfold? Is that going to be the new work that we as humans do — assist AI?

I see it as partnering or finding hybrid ways to work in the world, but I do think this is all ever-evolving. So as we partner, we need to figure out what it is that we’re good at, and what it is that AI is good at. I’ve been thinking a lot about craft and storytelling, and those things that are less tangible. The less tangible things seem to be the human things. It’s almost as if we’re going back to a time when we were much more hands-on in the way that we lived in our world, when creativity and craft were much more integral to survival. Storytelling, music-making, those things that are nuanced, the things that humans have at our core, are in some ways much more important to us in functioning in a society that’s run by algorithms.

But then I can also see that there seems to be a certain creativity within the deep learning algorithm, right? So it’s interesting to think about how those parallel lines run. I think we’re going to have to figure out what human is and isn’t, and what AI is, right? Because that’s an important question.

With AI, as with many technologies, you imagine the best-case scenario and it’s wonderful, and you imagine the worst-case scenario and it’s horrifying. Are you optimistic about the future when it comes to AI?

My personal take is at both ends. I’m definitely concerned about what an AI-mediated future looks like with humans controlling it and not taking care of the biases within these systems, and not thinking about how value systems are not the domain of one community. These smart technologies are going to be able to work better, faster, and longer than humans at some point. So, yeah, that’s all scary, but I also feel like this time around, the Age of AI will arrive, or at least AI will develop much further than it has in the past. Its deployment is exponential. So we can’t just fear it and stick our heads in the sand. I think that there are a lot of opportunities, and I think AI is still young enough that people can really get on board and figure out what they would like to do with this technology.

I’m an artist who is not a technologist; I’m not a programmer. I just stumbled onto these technologies and started asking questions. Yet here I am talking to researchers about the technology, and I am asking the exact same questions they are. We are trying to solve for the same thing. How is that not a great opportunity? It’s all about how we get together and help people not hoard their knowledge. It’s really important that we find languages that we can all speak and use productively together.

That’s a great point. That’s where you get back to inclusivity and the importance of getting input from everyone and not just the select elite few.

Yeah, and the select elite view is usually pretty much the same. It’s important because our perspectives are different, and we don’t all see things the same way. We see things quite differently — and we see different things. For example, even President Obama has had problems getting a cab in New York City, right? The reality is different for different people, and we need to understand, respect, and preserve those different realities.

Interested in joining the conversation about the reinvention of stories in the digital age? Subscribe to our newsletter and apply now to attend the annual invitation-only Future of StoryTelling Summit at fost.org/apply.
