Building Trustworthy AI With Mozilla

In this inspiring episode, we are joined by Temi Popo, Chenai Chair, Kathy Pham, and Wiebke Toussaint. We talk a bit about their backgrounds and how they got started in this space, how they handle difficult situations around fairness and equality, the importance of having people who experience inequalities as part of the solution, the different ways Mozilla is building trustworthy AI on a global scale, and so much more.

The following talk was recorded at a Women in AI Ethics event.

Temi Popo: Welcome, everyone, to the Building Trustworthy AI with Mozilla panel. Good morning, good afternoon, good evening, depending on where you’re joining from. My name is Temi Popo, I am a Program Manager at Mozilla Foundation, and I’m based in Montreal, Canada.

On this panel, we will explore different ways in which Mozilla and the Internet Health Movement are making the web a healthier place. Whether it’s through the Building Trustworthy AI Working Groups, the Responsible Computing Program, or our growing focus on African innovation with the Kiswahili project, Mozilla is mobilizing builders around the world to create more ethical and more inclusive technology.

Today we have Wiebke Toussaint. She is a member of the Building Trustworthy AI working group at Mozilla and a Ph.D. student at the Delft University of Technology in the Netherlands. We also have Kathy Pham, Co-Director of Responsible Computing at Mozilla and Chenai Chair who is leading Mozilla’s African innovation efforts.

So welcome to you all, and thank you so much for joining. I’m going to pass it over to each of you to introduce yourselves. It’d be great if you could tell us where you’re based and what work you do at, or with, Mozilla. And if you can also give us a bit of background on what led you on this path, to the work that you’re currently doing, that would be great. We can start with Chenai.

Chenai Chair: Thanks so much, Temi. It’s such a pleasure to be on this panel as staff with Mozilla. I’m dialing in from Johannesburg, so it’s late evening for me in South Africa. I consider myself a feminist in technology. I’ve been working in the tech space trying to understand the issues around the digital divide on the African continent from a research perspective, and all of my work has taken on that gender understanding, guided by a feminist perspective and making use of the Feminist Principles of the Internet.

My journey into actually joining Mozilla Foundation as staff began last year, when I was a Tech Policy Fellow who specifically tried to understand the intersection between AI, privacy, and data protection from a feminist perspective. That resulted in a curated online resource which aims to map out gender issues in relation to AI, privacy, and data protection, and to think about what can be done to ensure that we have innovation that works, taking into account gender inequalities and social contexts.

Currently, my role at Mozilla is to support the work around African innovation. And today is exciting because we actually launched the Common Voice project focused on building the Kiswahili dataset. So that’s part of that initiative of breaking down the language barriers found in tech.

Temi Popo: Very, very interesting background. Kathy, can you tell us about yours?

Kathy Pham: So thrilled to be here. I am Kathy Pham. I am calling in from Boston, Massachusetts in the United States, and I am a trained computer scientist who has held various roles in product management, data science, and engineering at companies of different types. And that’s related to the work I do now and why I’m here.

So I co-lead the Responsible Computer Science Challenge and responsible computing work generally at Mozilla. What really led me here is that I started out very much in this bubble of tech is going to change the world, tech is going to be used for good, do no evil, and all these mantras that we now know are clichés in tech. That lasted probably about a decade, this deep belief that tech is just going to do so much good in the world. And there is definitely truth to that, right? There are ways in which technology can empower certain movements, can empower people. But then there’s a darker side of it as well.

And when I think back on my training in computer science, and just the culture of tech, we’re so often blinded by it. We don’t see it. We’re not trained to think about unintended consequences. We’re not trained to bring different communities around the world into consideration when we’re building technology.

And it took about a decade in the private tech sector, about four years in the US federal government building out a new tech startup inside government, and then being around so many brilliant scholars of history and anthropology and race and gender and philosophy and so many others at the Berkman Center at Harvard to really think, oh my gosh, tech has to be interdisciplinary. It has to respect different fields. It has to respect every environment that we go into and build for, and understand how to build with communities and how to empower them, instead of just parachuting our technology in and having tech colonialism of sorts. And that’s why I’m here. I have the pleasure of leading this Responsible Computing Challenge, to ultimately change how we think about computer science and computing.

It’s an effort that spans many disciplines, an effort that computer science cannot do alone. We can’t just think harder with only computing people in the room and expect to come up with some solution. And we get to do it at Mozilla, a place that thinks a lot about movement building. The teaching and pedagogy side is one part of what we’re focused on, but you’ll hear from so many others about the different projects we work on: pushing the industry to do better, researching the different harms technology has caused, researching its benefits. And then we have MoCo, the corporation side, which builds actual tech products. We get to see so much of this from many different vantage points, which I think really enriches the work we do, instead of taking only one lane in our thinking.

Temi Popo: We have some really heavy hitters today, and that was an amazing background as well. So next up we have Wiebke, and I’d love to know what brought you to the Building Trustworthy AI Working Group.

Wiebke Toussaint: Thanks so much, Temi. Chenai, I have to admit when you said Johannesburg, my eyes lit up and I had a short moment of envy, because South Africa is home and I always miss the sunshine and the people and the smells and, I guess, the culture and the atmosphere. Well, South Africa’s home; life at the moment is in The Hague, in the Netherlands, because I’m doing my Ph.D. at the Technical University of Delft.

As a little bit of a background, maybe in contrast to Kathy, my bachelor’s degree was in mechanical engineering.

Growing up in South Africa, inequality is felt, and it’s felt because it’s been intentionally architected into physical infrastructure over decades of apartheid. And so as an engineering student in South Africa, I always felt that our curriculum hadn’t changed, 15 years at that point after apartheid; we were still doing physical engineering in the same way, without rethinking these things. So I spent a big portion of my twenties doing advocacy work in South Africa’s engineering sector to bring community development into the engineering curriculum. I always saw education, and especially university education, as this entry point where you could change the trajectory of an entire generation of thinkers from there on.

But my personal journey then led me from engineering to data science and AI. And as I left, I guess, as I migrated out of the traditional engineering space into AI, I was completely shocked to realize that the inequalities that I saw in engineering design were even more prominent in technology.

And so I think, again, if we trace that: from inequality being felt so physically, to seeing it translated into engineering infrastructure, and then to seeing it mirrored and maybe even amplified in tech. That’s kind of where the journey was.

As I started moving more and more into tech, I was looking for an opportunity to volunteer. In engineering, I had co-founded Engineers Without Borders in South Africa, and I spent a lot of time doing what I think of as MacGyvering, trying to figure out how to make these things work out.

I wanted, on the one hand, to volunteer in a space that aligned with my values in tech and in AI, and on the other hand, to be able to learn from people who had been doing this for longer than me. So when Mozilla launched a call for the Trustworthy AI Working Group last year in August, that was really perfect. I thought, this is where I want to be, and I joined the meetings, the meetups. And when there was an opportunity to suggest a project that we would be interested in doing, I put up my hand and did that.

Temi Popo: I’ll just follow up with you right there, Wiebke, because I think your project with the Building Trustworthy AI Working Group is really interesting and ties into a lot of what Kathy was saying about pedagogy and just changing the way that we approach teaching computer science and making sure that there’s intersectionality and people are trained to see the dark side of things before it’s too late. So can you tell us a bit about the Zen of ML, which is your project?

Wiebke Toussaint: Yeah. So the project that I proposed, and that we’ve been working on, is called The Zen of Machine Learning. It’s inspired by The Zen of Python, which, for those not familiar with the Python language, is effectively a set of design principles for what beautiful, and by beautiful I mean good, Python code looks like. The inspiration behind it is, yes, The Zen of Python, but there is a philosophical aspect to it as well, which is that, in my view, any technology that we build is made up of a whole series of design decisions, and design decisions are always normative. There’s never a true and false; there’s always decision-making. It serves some, it doesn’t serve others. That’s just the nature of built technology, as opposed to science.
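For readers who haven’t seen it, the Zen of Python that Wiebke references actually ships with the language itself as an Easter egg. A minimal sketch of retrieving it:

```python
# The Zen of Python ships with Python's standard library as the
# `this` module. Importing it prints Tim Peters' aphorisms; the
# raw text is also stored ROT13-encoded in `this.s`.
import codecs
import this  # the principles are printed as a side effect of import

zen = codecs.decode(this.s, "rot13")  # decode the stored text
print(zen.splitlines()[2])  # "Beautiful is better than ugly."
```

The Zen of Machine Learning borrows this format: a short, memorable list of normative design principles, aimed at practice rather than syntax.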

The idea behind the Zen of Machine Learning was: if we think of responsible machine learning practice, and we think of all the people who are teaching machine learning to themselves, the self-starters, how do we put together a set of design principles that really helps those self-starters think about responsible machine learning, not at the end of a course, or not at all, but right at the beginning when they get started?

Temi Popo: And I think it really aligns, Kathy, with what you’ve been working on with the Teaching Responsible Computing Playbook. And I know Wiebke’s a huge fan of that playbook. So can you tell us a bit more about your work on it and how you collaborated with so many academics to get it done?

Kathy Pham: Yeah. What you were just describing, theories of design decisions along the way, is basically what so many members of this cohort are thinking about. We’ve been so lucky to work with 19 colleges and universities to really rethink what teaching responsible computing even is. And we were very lucky to have a pretty interdisciplinary group, ranging from community colleges to public schools, to liberal arts universities, to Ivy Leagues, who really challenge each other. And this playbook was really just meant to be all the things we kind of wish we knew when we started this journey of teaching responsible computing.

And what that looks like is how do you have difficult conversations? How do you bring topics of race equity and justice into the classroom? Especially if you are not from any community that’s ever been marginalized ever, and those conversations are really uncomfortable.

How do you pick which classes to even start with, the core classes that you take in computing: algorithms, data science, et cetera? A lot of times those classes are still very much about learning how to build, but that’s about it. And if you care about society or human-computer interaction or user experience, that’s the extra elective stuff.

And so how do you weave that into each of these classes? The playbook gets at how you pick which classes, how you convince other faculty members, how you manage your teaching team. It’s really meant for other academics looking to do this work, and weaved throughout are examples of how these colleges and universities have done it. How Buffalo chose different introductory classes versus senior design classes. How role-playing games in Atlanta played a big role. How a couple of universities worked in an interdisciplinary manner, which is really hard in academia, across three different colleges and universities, to rethink carbon emissions and AI and tech.

How do you make these lessons stick? Going from the first introductory class, your freshman year when these students are only 17 or 18 years old, which is very early in the adulthood of life, right? It’s taken some of us a few years or decades to start to really understand some of these implications of technology.

So how do we go from that to four years later, having some scaffolding and understanding, and making it stick along the way, so that by the time they graduate, their senior design project isn’t something like: yeah, yeah, I heard all those things, but I just really want to build the next social media network, without taking any of those considerations into account because they didn’t stick.

And then I think the next part of that is: what does it look like in industry when they actually go to work? Because we want them to think about it when they’re writing an algorithm or a little piece of code. If they’re picking a data structure for gender, instead of going with a boolean that is on or off, they think of something much broader, because it’s not just yes or no.

And there are always small decisions that make up the big thing that you ultimately see. So how do we help students get to those small decisions, and then help faculty understand how to teach that, have those conversations, and make it stick? That’s what the Responsible Computing Challenge has been about. It’s been such a brilliant, fantastic collaboration with about 20 colleges and universities to really rethink that topic.
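Kathy’s boolean example can be sketched in code. This is purely illustrative; the class and field names below are hypothetical, not from any real system:

```python
from dataclasses import dataclass
from typing import Optional

# A narrow design decision: gender as an on/off flag.
# Anyone outside the binary simply cannot be represented.
@dataclass
class UserProfileNarrow:
    is_female: bool  # true/false is the only choice offered

# A broader design decision: gender as an open, optional field.
# Users can self-describe, or decline to answer at all.
@dataclass
class UserProfileBroader:
    gender: Optional[str] = None  # free-text self-description

narrow = UserProfileNarrow(is_female=True)
broader = UserProfileBroader(gender="non-binary")
undisclosed = UserProfileBroader()  # answering is not mandatory
```

The point of the contrast is exactly the "small decision" Kathy describes: one line of a schema quietly determines who a system can and cannot represent.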

Temi Popo: Thanks for sharing that. So Chenai, can you tell us a little bit about the Kiswahili project? How it started, what the impetus for it was, and how it’s going?

Chenai Chair: Thanks so much, Temi. So the Kiswahili project started off as an addition to the Kinyarwanda work in Rwanda. And, you know, just a little bit of background about Common Voice: it came from understanding the need to diversify and democratize the voice space, because clearly the data that’s out there is held by big tech companies. It’s not open source, and there’s a sense of data extraction rather than contribution by, and ownership for, the community. So Common Voice was started with this idea of diversifying the space.

Now, with the African languages work, the recognition is that the big tech companies currently in the voice technology space do not actually serve African languages today. I stand to be corrected, but none of the Siris, the Amazons, or the Google Homes actually support an African language.

And then, in addition to that, there are issues around bias in relation to accent, where one has to change how they speak so that the system can actually understand them. We also know that it works better for men than it does for women. So these are things we knew were an issue when it came to voice technology, while also understanding that as we move toward interacting with these technologies, speech is going to be very powerful.

The Kiswahili project was launched as a way of trying to figure out: could we serve underserved communities by creating a dataset for these languages and then making it open, so that innovators, researchers, and other communities can actually use the dataset to develop solutions for marginalized communities, or just more generally for the use of tech?

We were fortunate enough to have funding from the Gates Foundation, GIZ, the German funding institution, and FCDO, the UK funding institution, to allow us to build up this project over the next three years. That will see key staff, such as myself, and also three fellows, including a machine learning fellow who has actually been working in the NLP community for a long time.

And then we’ve also brought in community engagement fellows, because the important part about this is that you want communities to have a sense of ownership, and you want to include them, because Kiswahili is not a homogenous language. It’s diverse in the way that it’s spoken, in its accents, and in its understandings in particular contexts.

So we’ve made sure we’ve got community engagement fellows who will be working towards collecting the data, working together to curate the data, and making sure the dataset is representative. That’s the part where we’re thinking about the issues around bias and discrimination right from the beginning, in terms of the design. So that has been the background of the Kiswahili work, and the motivation for doing it: to address an issue that’s there, to work towards seeing if it can be applied through the use cases we’re developing, and to try to understand the impact it will have in terms of reach, knowing full well the regional context of limited internet access, affordability issues, and quality of connection hampered by just having access to electricity. Those are some of the things we’re thinking about with this technology as part of the solution.

Temi Popo: I love how you’re thinking about all the different aspects that go into voice technology. I’ve always been cognizant of the accent piece, but I really wasn’t cognizant of the male-female thing until I moved in with my fiancé. And I feel like my voice assistant disrespects me daily, because she answers to him and not me. So, yeah, just the intersection of where all those things come in, the accent and the gender. Very cool work.

So can you tell us how you’re ensuring global inclusivity for Africa when it comes to your work with Mozilla? A lot of tech companies have this global perspective, but that global perspective never includes Africa. What’s your opinion on that?

Chenai Chair: I’m a big critic of any tech company because of where it’s located; your address tells me how much you care about Africa. I think for me, the big thing, even when I was joining Mozilla, has always been thinking about the intentionality of the work.

If you are going to be engaging the African continent, how will you be doing it? Are you conscious of your own bias in terms of who you are, as potentially a white man in the room whom African governments may listen to better, because that’s who they respect? At the same time, if you’re going to bring on African staff as a way to make sure you’ve got representation, is the environment safe enough for us to actually fully engage and speak as experts on our own issues, and not be brought in as the token hire in the room? I think with Mozilla, what I found, reflecting personally, is that there is that intentionality in thinking about the work. There’s also, even in the design and application of solutions, an approach of: we will fund to support what exists, not necessarily fund so that we have our own solution.

Even the partners on the Kiswahili project are fully aware of what it means to engage as funders in the technical space who have a particular project that they want to see succeed whilst recognizing that there are other issues and priorities that other people might have.

I always frame the conversation around AI within the African context as: it has to be access plus AI. You can’t simply talk about artificial intelligence or digital rights without thinking about it, from the very beginning, from the access perspective. Everything that you’re doing has to be connected on that spectrum. So that’s another way of thinking about that global engagement.

But I think a lot of it also is that Mozilla doesn’t have all the answers, right? So this is also that learning space of: how are we going to do this? And this speaks to the work that we’re doing as the African innovation team, where it’s about movement building and carefully planning how we intentionally engage with and learn from the community that’s already there, so that what we do is sustainable, supports existing communities, and builds up knowledge and capacity. And also, later on, designing how best we support and when we need an exit route. How do we bring our strengths to support what’s already going on?

Temi Popo: There was a lot of head nodding as you were speaking. And I think that’s because all of you have touched on creating a space for underrepresented voices. Chenai is quite literally doing this with Common Voice. But Kathy, how do you center underrepresented voices in your work?

Kathy Pham: So many different points there around understanding access, and also around Mozilla not knowing all the answers and having to constantly learn and try. I think a big part of that is having a culture, in any of the companies or organizations we’re in, where the moment we realize we made a mistake, or there’s something wrong, or we’ve missed a voice, there’s the ability to immediately pivot, or to immediately say: how do we now put weight behind that and change?

I think the tech sector for a long time has generally been in the category of: we’ll worry about it later, not our problem, not worrying about it now. And then, intentionally or even unintentionally, at some point it leaves out underrepresented groups, and later on intentionally leaves out different groups, because it’s too hard to shift or change, or at that point it’s too late because you haven’t hired enough people to focus on a certain region of the world, and you’re causing a lot of damage in those parts of the world.

So the first phase of the Responsible Computing Challenge was meant to be almost a big experiment, to see: is anyone even going to show up? Are there any computing programs that even care about this?

So we focused first on US-based schools, for a number of reasons, and made sure we had criteria in our rubric in place such that we had a cross-cut of different types of colleges and universities. For example, COVID hit right in the middle of the challenge.

There were schools that just assumed students were going to go online and get on Zoom, which sounds crazy to people who think about this, but is normal for people who don’t. And immediately there were folks in the cohort who said: our first thought was to send our students home with flash drives, because they do not have internet where they are. Or: our universities are negotiating with the local towns to get internet access for our students where they live, because they live in parts of the country that have very little access.

And so I think that’s just a living testament that having different and diverse voices in the room is not just about being able to say, oh, we have different kinds of leaders. It’s the day-to-day interactions people have with each other, so that it’s no longer a hard lift to go and try to understand a particular concept. There’s just someone else around to help us understand something: people who have these lived experiences, alongside someone else who knows it only as a case study they study or think about in the classroom.

The areas we have to expand on in the next phase of the Responsible Computing Challenge are how to bring on a more global community, and specifically how to support HBCUs, historically Black colleges and universities in the United States, as well as potentially other minority-serving institutions. Right now we have a global community of practice for teaching responsible computer science. It’s a community of people across the world, a lot of them computing faculty but also from other disciplines, who care about or think about what teaching responsible computing is, and we’re looking to continue to grow and expand that. That’s on our Responsible CS website as well.

Temi Popo: Thanks, Kathy. So, Wiebke, you’ve also been doing some head-nodding, so feel free to add in any thoughts that you have. But my question to you is specifically about your focus on self-learners with the Zen of ML. As someone pursuing a Ph.D. in machine learning, having gone through academia to get your knowledge and skills, why do you think it’s so important to teach self-learners responsible ML?

Wiebke Toussaint: Thanks so much. Yeah, the head nodding was just enthusiasm, because I loved the way that Chenai was framing things. I think especially the access plus AI framing speaks so deeply to me; in my previous life in engineering, I used to work on household energy consumption, residential electricity.

I think quite often, when we have conversations around AI, we forget that a huge portion of Africa is unelectrified, and that there are always multiple problems that have to be solved together. And also that budgets aren’t endless: if we’re solving for AI, does it come at the cost of electricity and water and road infrastructure and housing? So that was where the nodding was coming from, the access plus AI. I think I’ll definitely carry that with me.

On the self-learner component: I think the inspiration for that actually also came from Africa’s flourishing machine learning community. If you look at computer science programs in universities, at least in South Africa, they are small in relation to traditional engineering; a lot of people go to study traditional engineering. And interestingly enough, the demographics in traditional engineering are mostly very reflective of the country’s demographics, which comes from the funding sources available: corporates have for many years been giving bursaries to fund the education of students, and so on.

But in computer science that hasn’t been the case. Maybe because we don’t have so many tech companies, but also because the tech companies that are there don’t give bursaries, and people just don’t have so much exposure. I know I didn’t, and I had more access than most people in the country.

So there’s a relatively small community of computer scientists in South Africa compared to traditional engineering. But there’s been a massive movement of people self-learning, or learning to code through programs. And with machine learning now, both in South Africa and across the African continent, a huge number of people have really taken to the resources that are available online to innovate locally, on the ground, and find ways of using machine learning to solve their own problems, which I think is amazing.

And maybe just to say, it’s not just Africa. At MozFest we had a couple of people from different parts of the world, from the Philippines, from Brazil. And again, all of those are ground-up movements, bringing people together, bringing self-learners together to learn machine learning.

And that is a wonderful part of information technology, and of machine learning: it is accessible once you have a computer and access to electricity. But on the other hand, the academic, and maybe also corporate, machine learning community is more and more realizing the issues with machine learning: it doesn’t work the way we expected; there are people who get fundamentally excluded. And coming back to self-learners: self-learners often end up more in the space of application rather than theoretical technology development. So the question really becomes: how do we bridge that space? Because people who want to use machine learning technologies in applications to solve local problems might not expect that there are certain glitches in the technology.

So let’s say, we know the models are often pretty large, and you can compress them to make them smaller and put them on devices. But the feature phones and smartphones being used in Africa by people wanting to run machine learning models often have much lower resources than, for example, the newest iPhones.

And then you start to ask: okay, how do these models change when we take them from bigger to smaller? Who starts getting excluded? Do thresholds matter? Are we represented in the categories of the features? So I think the inspiration really came from seeing Africa’s rising community of self-learners, but also realizing that there is this gap between people self-learning for applications on the ground

and, on the other hand, the flaws and gaps in the technology being recognized. And yeah, the question is: how do we communicate the one to the other?
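The question Wiebke raises, who gets excluded when a model is compressed, can be made concrete with a per-group evaluation. This is a hedged sketch of that kind of audit, not anything from the Zen of ML itself; the toy labels, group names, and function below are invented for illustration:

```python
# Compare a model's accuracy per subgroup, rather than only in
# aggregate, before and after compression. Disparities that an
# overall accuracy number hides show up immediately.
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy broken down by subgroup membership."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Toy labels: the "compressed" model loses accuracy only on group B,
# a disparity that aggregate accuracy alone would obscure.
y_true     = [1, 0, 1, 1, 0, 1, 0, 0]
pred_full  = [1, 0, 1, 1, 0, 1, 0, 0]  # full model: all correct
pred_small = [1, 0, 1, 1, 0, 0, 1, 1]  # compressed: errs on group B
groups     = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(per_group_accuracy(y_true, pred_full, groups))   # {'A': 1.0, 'B': 1.0}
print(per_group_accuracy(y_true, pred_small, groups))  # {'A': 1.0, 'B': 0.25}
```

The design point is the one from the conversation: a self-learner deploying a compressed model on low-resource phones has no reason to suspect this failure mode unless the evaluation is broken down by group.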

Kathy Pham: Can I add something to that? What does it look like when there’s a very lucrative field, and we’ve created a world in which only certain types of people can study it, get jobs in it, and be part of this global minority that builds for the rest of the world?

And that field, right now, is computing. Also, Dr. Jasmine McNeely reminded me the other day that we so often talk about participatory design, which is great, and about how we build more inclusive design and bring communities to the table. But a very important conversation is also how we simply empower people to build for themselves, rather than having this view of us in the West building for the rest of the world in a more responsible way, which carries its own complexities as well. So thank you for bringing that up.

Temi Popo: Thank you, Wiebke, for that answer, and great addition, Kathy. But I wanted to talk a little bit about community, because it seems that in all of your work, community is quite central to how you create, and it seems like your communities are mostly women-led, or at least have a strong female force at the helm.

So I’m curious to learn from each of you what you think is most impactful about building technology through community, and what it feels like to lead the communities that you do. And we can start that way with you, Chenai, because I also wanted you to add a piece about the feminist AI work you’ve done before and about bringing feminist perspectives to technology.

Chenai Chair: Thanks, Temi. I guess I will pull my answer together in that reflection, because thinking about feminist principles is really about thinking about context and community.

So my work bringing a feminist lens into AI was really inspired by a hodgepodge of feminist approaches that have been coming to the forefront right now in thinking about tech.

So the work around data feminism by Catherine D’Ignazio and Lauren Klein; the theories around intersectionality from Kimberlé Crenshaw and Patricia Hill Collins; as well as Dr. Sylvia Tamale, who actually wrote a book on Afro-feminist futures. It was talking about data and intersectionality, and then also Afro-feminist futures.

This is from a recent project I did with my colleague, where we were trying to understand how feminist movements across the African continent could make use of data. Initially, it was really about digital data within the tech space, but what we found was that it's all data: your traditional quantitative and qualitative data through to your digital data.

So bringing a feminist perspective into my work has really been about moving away from centering the technology to actually centering the communities. I started out in this space from an ICT4D (Information and Communication Technology for Development) background, and the biggest criticism of ICT4D was that it looked at the solution from an ICT perspective and asked everybody to come along with it.

So we give all the women pink telephones, and that's the way we close the gender digital divide. The feminist principles and way of thinking asked, instead: how do we center it in a way that recognizes the inequalities that are already there? Like the work I was talking about that should be done within the engineering community.

Recognizing how that was already placed within those structural inequalities, and then thinking about it more from a community perspective. My cultural background, the way I have interacted in society, has been from a community perspective. It's been about being raised by the community, even the philosophy of Ubuntu from the culture I belong to: "I am because other people are."

So then when you come into the tech space, it seems as if community is completely disregarded; it's about the individual. It's about that idea Kathy talked about, making sure there are particular kinds of people who succeed, and if you don't fall into that bracket, you're not going to be in it.

The reason I, a social scientist by training, get to be in this technical conversation is because there were people who were willing to move away from just having someone who studied tech in the conversation, and to actually think about it from an intersectional and ecosystem perspective.

So those feminist guidelines, the Feminist Principles of the Internet, were something that was done by the APC Women's Rights Programme. It's a collective movement: everybody contributes to it, and the framework gets redesigned every couple of years.

Centering community in that instance is about thinking: who am I designing for? Am I designing with them? Am I putting together a solution that sounds good on paper but, when actually applied, does not make sense in that particular context? And am I supporting what already exists?

So when I think about community, I think about it in that way, but I also think about it from that position of: what right do I have to engage with this particular community? And that, I think, is the responsible part about the technology. It allows one not to just come in and say, they don't have electricity so I'm going to give them electricity, when maybe there's a reason they don't, or there are other issues that matter more to them.

In terms of the last part of the question, about these communities being led by women, I think it's also just recognizing that it's a challenge as much as it's exciting. Of course you're in a space that empowers you and recognizes that you can do it, because I've got my Mozilla hat on. But the reality is that it's still working within patriarchal contexts, where some people are going to prefer to speak to a man rather than to a woman, like the AI system you're talking about, Temi.

And also just recognizing that there are certain cultural nuances and practices that may limit how people actually interact with the knowledge that you develop. So at the end of the day, it's working within the community to understand how it operates, but also working with that community to break down patriarchal stereotypes that limit the participation of women, and designing in a way that recognizes you can't simply ask all of the women to come to a town hall. Sometimes they can't, because they need childcare, or you have to convince their husbands. So how do you navigate those kinds of systems that are in place, that affect how you actually engage with a community? I was just thinking about women because that's my favorite community, but obviously we always have to think about the barriers and limitations in engaging with any community.

It’s a lot of fun, but the true reality is that there are barriers that we need to address when we engage in a community, especially around power.

Temi Popo: Very insightful. Thank you, Chenai. So I’ll pass it over to you, Kathy, to talk a bit about building a community.

Kathy Pham: I really want to pull on the thread of communities and power and the intention it takes to build community.

I'll speak about it in terms of the responsible computing initiative. There are so many power dynamics within all communities. If you have a community of more than one person, there's going to naturally be some kind of power dynamic there, whether it's because of academic standing or race or gender or something else; there is always something implied.

And to ignore that and say, we're just going to come together as a community, or we're just going to interview a group of people, would be shortsighted and limit what we're looking to get out of the community. What that looks like in the tech and responsible computing space, building on what Chenai said, is that in the tech space there have notoriously been hierarchies of engineering versus non-engineering.

In some tech companies, there's literally a field in the HR system that labels people as engineering or non-engineering, so everything that's not engineering is "none." There were jokes at some tech companies in the past that you're either a software engineer or you're support. There are jokes people have at university: oh, you're an engineering or computer science major, you must be studying all the time because it's so hard; but if you're a business or international affairs or history major, you must have time to party every weekend. And these small, seemingly joking things, they're not jokes. They build into the fabric of the culture of who ultimately ends up getting to make the decisions on all those product launches, who ends up deciding who the next big hire is, who has a powerful voice in the room.

And so I think a big part of community building, whether it's with the challenge that we run or any tech team out there, is to bring these different perspectives to the table on completely equal footing. So it's not like the engineering team gets to make all the decisions and then maybe pulls in someone who understands deep historical context and race and gender issues on the side, just for consulting here and there. It's how you bring people in on equal footing and have conversations over and over again so that they're woven into everything we build. And that takes very intentional community building. It takes having sets of values. It takes understanding the respect we have for each other. And when we recognize that there are shortcomings on the team, or aspects missing in the community, figuring out how to get those voices in the room.

And Chenai talked about this: you're going to ask a bunch of women to go to a town hall and they don't show up. Well, are you paying for childcare? Are you looking at dynamics where maybe, in some cultures, they have to get permission? There are so many different factors that we have to take into account in any kind of community building. And to ignore that, of course, you can ignore it and still have data and results, but then you have a very narrow set of data.

You have a set that perhaps includes only the certain people who can show up to the table to work in that field, only the people who can show up for your interview that day. And so I think a big part of that is just very intentional community building. I'll share one anecdote that's a bit outside of this: I was in a meeting once where someone brought to the table this concept of, you know, what do you think if we just take Google Street View images and run machine learning on them to determine a safety score for a neighborhood?

And they were so excited about it, because people are complaining about unsafe neighborhoods, so we'll just use machine learning and we'll decide on the safety score. And pretty quickly, with people in the room with history and race and anthropology and other backgrounds, there were questions like: have you heard of the Broken Windows Theory? You can't really judge a neighborhood by broken windows. And one question someone asked really got to the heart of it: well, safety for whom? That person had just never thought of that concept, that even the term safety means something different to different people. When you have a homogenous team, the definition of safety might be one thing, and that completely changes the way we build technology.

So, in summary, community building is intentional. We have to recognize power structures, and we have to build in scaffolding to keep the power dynamics in check, or else no matter how many people you bring together, some of those voices are going to be drowned out, or they end up leaving the companies, they don't show up for interviews, they can't make it, et cetera.

Temi Popo: Very important points on being intentional. Thank you, Kathy. Wiebke, I know with the Building Trustworthy AI Working Group, you had to assemble a team that was kind of self-organizing. How did it come together, especially with people from all different parts of the world whom you had never met in person?

Wiebke Toussaint: Yeah, the Trustworthy AI working group definitely helped a lot with that. I think there's always someone who can be your aggregator and bring interesting people together, and from there you can kind of direct people into the different projects that they're interested in.

I think what was a challenge for us particularly, and I still see it as one of the core challenges, to put it a bit more technically, is bridging the participatory and the technical aspects of technology.

And so we had a lot of people who were really enthusiastic about the Zen of Machine Learning and who really felt that responsible machine learning was important, but who came from different backgrounds: from software engineering to ethics and philosophy to, I guess, community building, and people with law backgrounds. So with people from these different backgrounds wanting to contribute, the question becomes: how do you build design principles that are a valid product, how do you build on expertise, while also being inclusive of the people around the table? That, to me, is a really interesting question of bridge-building. How do you bridge these spaces between the technical and the participatory?

The way we ended up adapting to who was in the room was saying: given the people who have arrived, how do we not stay stubborn about wanting to do one particular thing, but instead start drawing on what the people who want to contribute to this want to see from it?

So how do we move away from seeing the design principles as the outcome, and instead turn to the process and see that as the value? The conversations that are being had, the things that are being learned. And I think that's something I'm starting to realize more and more: when we want to be inclusive, it often becomes valuable to turn away from being stubborn about the outcome and start being much more focused on the process.

And the process means being willing to adjust pace, being willing to stop, being willing to turn around, being willing to go with the ebb and flow of things. And at the end, looking at what you have and saying: hey, this is what we have, and I'm happy with it. In my experience so far with a couple of projects, I've usually ended up thinking, this wasn't what I thought it would be, but I think it's actually better.

Temi Popo: You’ve all spoken about structural inequalities in different ways and how technology can either exacerbate these structural inequalities or in some cases can mend them. So I’m very interested to know from the work you’ve done, what structural inequalities do you think ended up being exacerbated by certain technologies and where do you think technology can truly mend them?

I know, Kathy, you have a chapter on justice and equity in the Teaching Responsible Computing Playbook, so I'd love to hear about that.

Kathy Pham: There's so much to unpack, I think, on the dual edge of technology, right? There are cases where video cameras overly surveil certain people, but at the same time also bring to light certain injustices.

There are technologies where you think, oh, you're able to connect families. But at the same time, by connecting families, you now propagate misinformation about different communities of people that later does massive in-person harm.

And so I think in general, with any tech that we build, what matters is an understanding of the different ways it can be used.

And so often we build with the lens of here's all the good it can do, without thinking of any potential side effects. I think the best we can do is build in checks so that if something comes up that does harm, we do something about it. There are social media companies where it took decades to finally have something in place to mitigate some of the problems; there are so many cases like that.

There are common practices in tech and engineering, whether it's red teaming or DevOps or site reliability engineers, where if a website goes down, in the technical sense, the technical infrastructure fails, there's a whole team of people that swarms on it, because you understand that you just have to do something about it. So what's the equivalent in the cultural, less tangible sense? Because you can't predict everything. You can't predict every security breach. You can't predict every failure. So how do you build into your culture and your team the ability to respond when something happens?

Because almost every technology has some dual edge, right? On one hand, government websites to sign up for vaccines or social services can help groups of people who can't go in person or don't otherwise have certain access. On the other hand, if you build a website that isn't accessible for many people, now there are a ton of people who can't get access to vaccines.

So I think there's a lot of complexity around understanding different uses, understanding the different communities we're building within and for, and then, at the end of the day, having something in your team, your culture, and your values so that when something goes awry, you have a way to address it.

So that it's not the first time you're asking: who are we going to hire? Who are we going to bring into the room? We don't know. Oh my gosh! You have that ready.

Temi Popo: Thanks, Kathy. Wiebke? Chenai?

Wiebke Toussaint: I think what would help a lot is to start moving away from monopoly viewpoints. The moment we stop thinking that there's only one company that can solve something, or any one technology that can solve something, and realize that problems are almost always, probably actually always, localized and contextualized, and that having many people solving the same things in different ways to suit different people probably makes us all richer, I think that would be a great way forward.

So yeah, I think lessen monopolies. And for AI specifically, I think having less magical thinking would also help, because then we can start treating AI like technology, not like magic, which means we can fall back on a lot of technical knowledge that we've built up over decades and start being more rigorous about how we tackle it and how we scrutinize it.

Temi Popo: Yeah. So Chenai, do you have anything to add?

Chenai Chair: I just wanted to quickly add to, or re-emphasize, the point Kathy made. When you're talking about inequalities and trying to address them, it's about having the people who experience these inequalities actually be part of the solution, but part of the solution in a way that gives them the power to shift the conversation. Because often what happens is we think about inclusion and bring people into the room, but they don't have a budget, for example, if they're in an organization, to shift the conversation. They don't have final decision-making power to bring the people who are relevant into the space. At the end of the day, they are just representation and inclusion, but they do not have the structures in place to support them to make the decisions for change to happen.

Temi Popo: 100%. Thank you so much for having us, Mia.




Join this global community of women working hard to save humanity from the dark side of AI.
