Ethical and Trustworthy AI Tools with Saishruthi Swaminathan

Join us for the final episode of this amazing series in partnership with IBM where we have invited Saishruthi Swaminathan, Advisory Data Scientist, AI Strategy and Innovation at IBM. In this episode, we learn about her journey from a small town in India to her current role bringing ethical and trustworthy AI to life with the open-source tools and toolkits that she has been actively developing at IBM.

Note: The views on this podcast are those of the person being interviewed and don’t necessarily represent IBM’s positions, strategies or opinions.

Mia Dand: We are so excited to have you, Saishruthi. You have a fascinating background. You have been at IBM as a Technical Lead and Data Scientist since 2018, but your journey started in a small town in India. So tell us, how did you make the transition from there, with a degree in electrical engineering, to artificial intelligence at IBM?

Saishruthi Swaminathan: Sure. I’m always excited to share my journey, because if I share something that’s close to me, it can inspire even one person who is listening to this conversation, right? So let me start with this. If you want to use your full potential, if you want to use your energy to the fullest, I personally believe you need to pick an area where both your mind and heart can sit together. I’m getting into philosophical territory, but I think it’s really important. My one main goal when I completed high school was to identify my passion and turn that into my career, so I wouldn’t have to worry about work-life balance. I always saw work as part of my life. That is only possible if I do what I love, and the balance flows organically when we know our sweet spot.

So I did my undergrad in instrumentation engineering. I wanted to see if I really loved machines, but I couldn’t enjoy it much, and then I actually started working as a systems engineer, where I found my passion for computer programming. It just happened one day. I was so afraid to touch the code, and there was my manager, typing so fast. I just loved the sound, all the sound that came from the keyboard. That’s how my passion for computer programming started. I got the confidence. And then I came to the United States for my master’s. I took electrical engineering with networking as my specialization, but it was too abstract for me. I couldn’t relate. I knew there were packets flowing here and there and there was some software, but I didn’t see it talking to me.

Fast forward to my internship, where I was exposed to the field of AI and data science. I enjoyed it. For the first time I felt like there was beauty in the data. It was speaking to me. I was able to interact with it. I wasn’t feeling alone, like a person, an international student, coming all the way from a very rural part of India. All of a sudden I had new company, and it was just talking to me. I got so excited and I thought, okay, this is amazing.

So I just tried to shift gears. I took a chance. I just thought, okay, this is what I’m going to do, and I jumped into the field. Most of it for me was self-study and community interaction, learning by myself. And there were two professors who were my pillars of support. At this point I really want to thank them for putting me in the right spot.

So if I look back, it still amazes me how many small decisions I took: sleepless nights spent training myself, being part of hackathons, competitions, community work, and finally landing at IBM. There was actually one important short incident, if you want to hear it. I would just love for you to know about it.

Mia Dand: Yes, please do share.

Saishruthi Swaminathan: So I have experienced a lot of discrimination in my life based on my background. I have lost opportunities. I’ve missed opportunities. I used to take my resume everywhere, and based on my background, I always faced discrimination. So when I entered this field, I just took a pause and thought, “What if this data went into an AI system?”

I wanted to get into this field, but this kind of thought made me take a pause, and it made me think about this trustworthy AI journey. I started analyzing more. I started asking myself more questions: what it is, where it is going, what am I interacting with, et cetera.

And one other interesting thought, which I think you will enjoy: how many times was I gifted stuffed toys? That’s my first question. As a girl, I was most often gifted toys and things that, at some point, made me stay away from tech. Somehow, unconsciously, bias was being introduced there. It’s not conscious; it just creeps in unconsciously. I can keep adding scenarios, but… I started being more active in this space. And yes, I joined as a Developer Advocate and Data Scientist on the open source AI team, and then I became a Technical Lead. I was in open source for two years. That laid a perfect foundation for the next phase as a Trustworthy AI Advocate.

Mia Dand: Thank you for sharing. Your passion really comes through in your stories, and it’s inspiring for folks from non-traditional backgrounds who are listening to this interview and wondering, “Is there a place for me in this space?” Listening to your journey, to the passion and excitement in your voice, and to the path that you followed is going to inspire many more folks, especially women, to follow.

So, speaking of AI systems, which are developed by humans: what do you see as the biggest challenge in the AI/ML space today?

Saishruthi Swaminathan: Okay, this is an interesting question, and I would like to answer it from a few different views.

So I see this as a puzzle with many pieces and multiple sides. What I see may not be what you see, right? Or what others see. So we need input from people from all walks of life. That’s my first view of the challenge.

Understanding different cultures, behaviors, practices, beliefs, habits, and ways of communicating is very important. Every single experience matters, so how can we bring them together? We need diversity. We need people with different backgrounds. And I want to add something on top of that: we need to consciously bring them together. When I say consciously bring them together, I mean we need to know what values are being added and what is missing. So even though we have diversity, we are consciously building with it. That’s my first point. I’d like to hear your perspective on it, and then I want to bring in my other points as well.

Mia Dand: Oh, absolutely. Completely agree. I feel like diversity comes in so many different forms, and at the core of the work that I do with Women in AI Ethics is not just gender diversity; we also look at backgrounds. Women and non-binary folks coming from different backgrounds, technical and non-technical. The women I’ve interviewed at IBM come from such a variety of backgrounds and talents. We interviewed an anthropologist. We interviewed Phaedra, who comes from a gaming background. I think it’s essential that those perspectives are included. So yes, it resonates with me, and please do continue. We’d love to hear what the other points are.

Saishruthi Swaminathan: For sure. So my second point is self-awareness. When we see ourselves clearly, we can become more confident, make sound decisions, communicate better, and, I feel, we are less likely to make unethical decisions. So having that diverse team with the component of self-awareness is a blessing. That’s my perspective. It’s like knowing who you are and your empathy quotient. Be aware of your empathy quotient. Sometimes we may not have faced a certain situation ourselves, so we need to develop the ability to see from the perspective of people on the disadvantaged side, et cetera.

So I see that character makes the process more efficient, and we are in a fast-moving world. Sometimes we prefer fast food over home-cooked food, right? So we just need to take a pause and start building strong character. That’s my second perspective on the challenge.

And then, moving to the third: there are a lot of guidelines, principles, and policies coming up. We need to translate those principles into practice, and we need to empower people with all the details so that they can actually put those principles into practice. I want to take it one step further, and I’d really like to hear your view, because when I thought about it, this is something that keeps coming to my mind.

Let’s say I’m creating material here that can help empower people to understand how these policies and practices are connected. How can I make sure it reaches everyone? I know people who don’t have familiarity with the language. I’m not a native English speaker, but I’m here and I’m speaking in English. I took a leap. But it’s going to be different when a native English speaker creates a course and it gets spread around; the words that resonate are going to be different. How can I recreate it and make sure the point is explained in a way that even people from other backgrounds, in their own language, can understand? Do you see my point? So I just went a step deeper. When you are connecting these principles and practices, the language you’re familiar with is an important part as well. I would just like to hear what you think, because this has been on my mind for a long time.

Mia Dand: You raised two very good points. The last point was about how we can make this space more accessible, because there’s the language barrier, but also, when someone comes from academia, you see the language that is being used; it’s not accessible for folks who don’t have a PhD in that topic.

AI impacts all of us, so we all need to be able to get a basic level of understanding. Every person out there who’s being impacted should have some agency and understand how these systems are affecting them. I couldn’t agree more that we need to relate in a way, in a language, that is actually understandable to them. Make it easier, not by dumbing it down, but by making it simpler: less jargon, less complication, and where there are complicated terms, dismantle them and demystify some of that complexity. I feel that will go a long way.

And going back to your previous point: there’s a lot of talk, and there are also policies and guidelines. All of them are important, not taking anything away from that, but the work that you’re doing is so meaningful, because without the right tools that you’re developing, putting them into action is going to be that much more challenging. Speaking of that, let’s segue into what IBM has done, which is to build an entire library of trustworthy AI tools, and you have been very instrumental in building out some of that. Can you walk us through the intent behind building these tools? What does it involve, and are they free or do people have to pay to use them?

Saishruthi Swaminathan: Yes. I’m happy to share about the tools, which I’m super passionate about as well. The first I would like to start with is AI Fairness 360. This particular toolkit focuses on two things: first, detecting unwanted bias in data and machine learning models, and second, mitigating the detected bias in the data, in the model being built, or in the already-built model. So that’s fairness.

Next is explainability, with AI Explainability 360. The name by itself is self-explanatory: this toolkit digs into the explainability dimension. It gives you an explanation of the model’s predictions at both the local level (when I say local level, I mean the individual data point) and the global level, which is the model perspective. And these explanations are easy to translate for different personas, like data scientists, business owners, users, et cetera. So that’s the explainability side.

Now I’m getting into robustness: protection against attacks, like your model being reverse-engineered. This particular toolkit, the Adversarial Robustness Toolbox, can help defend and evaluate machine learning models and applications against these adversarial threats and attacks (a small code sketch follows after this rundown of the tools).

And the next tool we have is AI FactSheets 360, and the goal there is to increase transparency and enable governance. Transparency can be increased through disclosure, so through FactSheets we provide details about the data, details about how the model was trained, the architecture, the performance metrics, all the bias and fairness evaluations that have been done, and when your model will perform well and when it will not.

These are some of the sections, and there is no one fixed set of questions; it varies with what we need, but that’s the FactSheet content overall. So, as I mentioned before: fairness, explainability, robustness, and FactSheets. There are three more.

One is Uncertainty Quantification 360. It provides state-of-the-art algorithms to streamline the process of estimating, evaluating, and improving the uncertainty of machine learning models.

And we have Causal Inference 360, for quantifying cause-and-effect relationships in data.

And finally, AI Privacy 360. It includes several tools to support the assessment of privacy risks in AI-based solutions, and it can help teams adhere to any relevant privacy requirements.

So all these tools are open source. They’re open for anyone to contribute to, and that’s the beauty of it: we can all build this together. I hope that answers your question.
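
To make the robustness piece concrete, here is a minimal, hypothetical sketch using the open-source Adversarial Robustness Toolbox (the art Python package): it crafts adversarial inputs against a simple model and measures the drop in accuracy. The toy data and model are illustrative placeholders, not anything from the episode:

```python
# Hypothetical sketch: evasion attack with the Adversarial Robustness
# Toolbox (ART). Data and model below are made-up placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Toy data: 200 samples, 4 features in [0, 1], binary labels.
rng = np.random.default_rng(0)
X = rng.random((200, 4)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Wrap the trained model so ART can query predictions and gradients.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Fast Gradient Method: small input perturbations aimed at flipping
# the model's predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
X_adv = attack.generate(x=X)

# Compare accuracy on clean vs. adversarial inputs; a large gap
# signals that the model needs defenses.
clean_acc = (classifier.predict(X).argmax(axis=1) == y).mean()
adv_acc = (classifier.predict(X_adv).argmax(axis=1) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial: {adv_acc:.2f}")
```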

Mia Dand: Yes, it absolutely does. Thank you for walking through all of that, and it’s great to hear these are open source so anyone can use them. Let’s talk about the one that you have specifically worked on and contributed to, the AI Fairness 360 toolkit. How does that work, and what specific area in trustworthy AI does it address?

Saishruthi Swaminathan: Yes. So AI Fairness 360 is a comprehensive toolkit that helps, as I mentioned before, to detect unwanted bias in data and machine learning models. First I’m going to walk you through the detection part, and then we can get into the mitigation part, since we are covering this one more specifically.

There are about 70 metrics in this toolkit that can help in the detection stage. At a high level, we have two classifications: one is group fairness and the other is individual fairness. When I say group fairness, it’s obvious from the name: we take a protected attribute, where there could be bias, and we split the population into groups. And after you split, you look for some statistical measure to be equal across the groups. You have to make sure the groups are being treated equally. And if you want to go even deeper, within group fairness you have different worldviews. One view says everyone has the same ability to do the task, and you have some metrics for that. The second view is “what you see is what you get.” For example, my score is 70/100. We don’t care what triggered it or what made me get that score; we just take the 70 as what the person is capable of. What you see is what you get. So that’s group fairness, that side of the world.

Then we come to individual fairness, and again, it’s self-explanatory: we want every pair of similar individuals to be treated in a similar way. Let’s say our data is given to an AI system, and you and I have almost all the same features, all the same values; then we also expect to get the same outcome. That’s what individual fairness is all about.

So that’s the metrics part. I hope I’m going slowly enough, because this is a lot. Since I’m so passionate, I tend to share a lot of information. I hope I’m making sense here.
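
To make the detection side concrete, here is a minimal, hypothetical sketch using the aif360 Python package (pip install aif360). The toy hiring data, column names, and group definitions are illustrative, not from the episode:

```python
# Hypothetical example: measuring group and individual fairness
# with AI Fairness 360. Data and column names are made up.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring data: 'sex' is the protected attribute (1 = privileged),
# 'hired' is the favorable outcome (1 = hired).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [70, 85, 60, 90, 72, 88, 65, 91],
    "hired": [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Group fairness: difference/ratio of favorable-outcome rates between
# groups (parity is 0 for the difference, 1 for the ratio).
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())

# Individual fairness: are similar individuals labeled similarly?
print("consistency:", metric.consistency())
```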

Mia Dand: Yes, absolutely. I’m just making a mental note to invite you back so we can do a deep dive, because there’s so much to unpack here: 70 metrics, plus group fairness and individual fairness, which are not as simple as they sound. Like I said, we are definitely going to invite you back for a deeper dive, because I’m fascinated by this space, and because it’s so important and so critical to getting this right. Would you like to continue and share some more about it?

Saishruthi Swaminathan: Yeah. So for the last part, I just want to go through the mitigation side really quickly, at a high level. For mitigating bias, we have about 10 algorithms that can help mitigate bias in the data, in the model being built, and in the already-built model.

There are three classifications: pre-processing, which works on the data; in-processing, when you’re building the model; and post-processing, when you deal with the predictions. So the choice of algorithm materially depends on where it can intervene in the pipeline (there’s a small sketch of the pre-processing case below). The best practice, again: early intervention is best, and all permissible categories should be tested, because no single algorithm works best independent of the data set. And I want to mention this before we move forward: tools are one part of the process in this space. An important component, but one among all the other components.
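
As a concrete illustration of the pre-processing category, here is a minimal, hypothetical sketch using the Reweighing algorithm from aif360. The toy data and group definitions are the same illustrative ones as in the earlier snippet:

```python
# Hypothetical sketch: pre-processing mitigation with Reweighing,
# which assigns new instance weights so the protected attribute and
# the label become statistically independent. Data is made up.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Same toy hiring data as in the earlier sketch (illustrative).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 0, 1, 0, 1, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["sex"]
)

priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Fit the reweighing transform and apply it; rows stay the same,
# only the instance weights change.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
dataset_transf = rw.fit_transform(dataset)

# The weighted group metric should move to (or near) parity.
before = BinaryLabelDatasetMetric(dataset, privileged_groups=priv,
                                  unprivileged_groups=unpriv)
after = BinaryLabelDatasetMetric(dataset_transf, privileged_groups=priv,
                                 unprivileged_groups=unpriv)
print("parity difference before:", before.statistical_parity_difference())
print("parity difference after: ", after.statistical_parity_difference())
```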

Mia Dand: I think we should always keep in mind what you said before, “Everybody wants fast food.”

This is actually a slower process, a more robust process. It has to be thoughtful. But I really liked the analogy that you used about how we just want something quickly. These are not things that should be done in a mad rush, like one step and you’re done. To your point, the tool is part of the overall process. And I do use a lot of the methodology that you have outlined in the workshops that I run as well. So I couldn’t agree more.

Saishruthi Swaminathan: And also, in the challenges part, I missed mentioning timelines. There is always a compromise in the decisions we make in our lives as well; all of us are human beings, so what we compromise on is very important. A pressurized timeline can disrupt the process and make you compromise on other factors. So this is a whole culture change: you have the tools, and the whole culture and process change around them, so we don’t have to compromise on this.

So it’s about educating and empowering overall, and not just the model subjects; even model creators should feel free and safe to operate in this environment. Sometimes you do these things unconsciously. I want each team to have that empathy quotient when taking a decision, instead of someone being put on the spot. So I see this as a collaborative environment, all of us building this together. A we-are-in-this-together kind of mindset can help us do really amazing stuff.

Mia Dand: I agree. There are no villains per se. We’re not trying to villainize anybody or say this person has done XYZ wrong. It’s about building it into your process from the start, so that it becomes part of your company culture, not something you catch after it’s all developed and launched and you’re like, “Wait, wait, wait. There was a problem.” It’s better to have it incorporated ahead of time. I also agree with the point that it has to be part of your culture; this is not something you can just bolt on at the last minute as an afterthought. So going back to the toolkit itself: it was originally in Python, and then you expanded it to other communities, like R. What was the inspiration behind that?

Saishruthi Swaminathan: Yes. So I co-created this toolkit in R along with two other members. The three of us worked closely together, so I would really like to give them credit and say their names as well: Gabriela, Tracy, and myself co-created this toolkit in R. The community played a major role as inspiration. I have always been a community person. I love interacting. I love hearing people’s perspectives, because that’s what really helped me get into this space. So far I have met over 20,000 people, just in the last one and a half years. I just go meet them, talk to them, hear them. And that’s why I’m talking with you today.

Personally, what inspired me to get involved: when I go to these community activities and help people understand this toolkit, oftentimes the programming language becomes a barrier. And if the programming language becomes a barrier, there’s a chance we might miss an important aspect, which is community input and feedback, because we want people to use this so they can give us their perspective and their feedback, and so they feel included as well. Who knows, with a programming language barrier we might miss some important group of people from whom we need that input. That personally made me feel this shouldn’t be a barrier, and how amazing it would be if we built this together as a community.

That’s what inspired me to take part, and together with these two amazing members, we built this toolkit.

Mia Dand: I really like your philosophy, and I feel like so much of it is grounded in making these tools accessible and expanding access to other communities. I hope this philosophy is embraced by more people, so we can really build a world, and build these systems, that are inclusive of more voices as well. And like you said, one of the biggest barriers is programming itself being the barrier to entry. I really appreciate it.

You currently maintain the AI Fairness 360 toolkit, and it’s open source, which means, as you said, it’s open for everyone to contribute to. But how is it maintained? Can you walk us through who else contributes to it, and if folks wanted to contribute, how would they go about doing it?

Saishruthi Swaminathan: I think that’s an important question for us, to develop this together. So my routine, all the actions I do with this toolkit, involves adding new metrics when something new comes up and updating existing ones, and I spend a lot of time creating examples to demonstrate the toolkit through a notebook or some other assets. I also collaborate with people who are interested in contributing but may not understand the flow, so I spend some time with them and walk them through what’s in the toolkit so they can start contributing. And then I do code review and housekeeping work, and of course I stay active on Slack; that’s the community we have to support the users and answer questions if they have any. These are some of my day-to-day activities with this toolkit. I’m going to give you the link to the toolkit, and people can get in and pick an issue to start a conversation, or they can join the Slack channel to be part of this great mission.

Mia Dand: That would be fantastic, thank you. We’ll make sure we include that in the blog post as well. So, open source toolkits: lately there has been a lot of discussion around them as new vulnerabilities are discovered in open source software. I would love your take on what some of the challenges with open source toolkits are, and what your recommendations are to mitigate them.

Saishruthi Swaminathan: Awesome. So I’m going to speak from my personal experience introducing this to the community. To champion the toolkit and the concepts so that people can use them effectively, there’s a real learning curve.

Some of these concepts are totally new. When I scroll through my community page or LinkedIn or whatever it is, we always see a lot of data cleaning, visualization, model development, and MLOps tools being highlighted. But now we have these trustworthy AI tools, tools that you need to get into data science and AI today, and they need to be plugged in somewhere. That brings an interesting twist to the process, because all the tools I’ve mentioned can be used in different areas of the pipeline. So this brings a twist, right? We need consistent support from the community, helping each other understand and incorporate this into existing processes.

And we have three toolkits donated to the Linux Foundation right now, with an open governance structure there, so we are bringing a lot of community interaction to that. There is also a certification process, so vulnerabilities don’t go unchecked; we make sure everything is safe and secure.

And the next part I want to highlight is bringing more awareness about usage by building assets, which increases confidence in using the tools. Sometimes when I go and meet people, they think this is not for them, and it really bothers me, so I started building these ABC-style assets, right? “Okay, come on. This is easy. Let’s just do this together.”

That kind of thing brings excitement, so people will be like, “Okay, this is not something that scares me. I can learn this.” It gives that kind of confidence. And the velocity of change is a whole challenge of its own.

These toolkits are constantly being updated, so we need to keep ours updated, too. So I see this as, you know, building trust among users that trustworthy AI toolkits are easy to use. That’s how I see it. I hope it makes sense.

Mia Dand: That was great. I’m just thinking through what you said: community plays a big role, obviously, along with awareness and accessibility, but so does making sure it’s updated on a regular basis. AI, and technology in general, moves so rapidly that if we don’t stay on top of it, if we’re not updating constantly, these tools lose their effectiveness or relevancy, and before you know it you’re 10 steps behind. So yes, the good news is that it’s community-built. The bad news, or rather one of the areas of improvement, I wouldn’t say gaps, is that it’s built by communities, so you have to constantly shepherd it, for sure.

Saishruthi Swaminathan: Yeah. And I also believe the change is actually happening. I can see a lot of initiatives, practices, and tools, and I’m so glad that I’m part of some of the main ones as well. It makes me feel that next time you ask me what the challenges are, I want to hear the question as, “How bad are the challenges?”

That’s what I’m aiming for. My belief is that this change is happening: a lot of initiatives, new policies, practices, tools, the excitement in people. So I would like to hear that question, “How bad are the challenges?” That’s something I’m really aiming for, and I’m just imagining myself being part of that 0.001%.

Mia Dand: You know, I’m just making notes, right? To ask you all these questions next time, because I’m very positive that we will have you back for a much deeper conversation about this. The work you’re doing is so important, and these tools and toolkits are such a critical part of our journey toward more responsible, more ethical, and trustworthy AI. So I greatly appreciate you joining us. Do you have any closing thoughts before we let you go?

Saishruthi Swaminathan: Yes. So, okay. This is a recent thought of mine. I think a lot, and I’m sorry if I’m giving you a lot of philosophies, thoughts, and details, but I really want to share this with the world.

I shared this in a recent talk as well. I want to live in an ideal world where we don’t need a Trustworthy AI Advocate. I see myself as a trustworthy AI advocate, and it made me think, “Should I even need to be an advocate?” These are all such basic things. I want to be part of this effort, and I see that world arriving very soon. I see the ideal world in front of me. So when Saishruthi introduces herself next time, I want to be more than a trustworthy AI advocate; I want to call myself something different, without having to advocate for the trust aspect of it. That’s what I’m aiming for, and I’m just super motivated to be part of it.

Mia Dand: I love hearing that. It’s so important to have a vision for what the ideal state should be. All AI should be trustworthy, and we want to get to that end state. The work that you’re doing, all the amazing women we have met as part of this series, and the work we’re doing through our community make me feel optimistic that it is possible.

So many brilliant minds are working on this. It has to be possible; that’s the vision, and that is the end state we are all shooting for. I so appreciate you joining us today, Saishruthi. Such a pleasure meeting you in person, finally. It’s long overdue.

Saishruthi Swaminathan: Same here. You’re doing amazing work bringing all of these amazing women together. I have been following your podcast, your blog, your posts, and there aren’t enough kudos; hats off for everything you’re doing. I’m looking forward to seeing more of it, to learning more from what you are doing, and to being part of it, if at all possible. So yeah, hats off and more kudos to you. It’s great. It’s amazing.

Mia Dand: Thank you so much. It’s the community that makes us great. And congratulations on making the 2022 list of 100 Brilliant Women in AI Ethics, which is how I found you and how we learned about your work. Thank you so much again, and I look forward to reconnecting with you and doing a deep dive on some of the topics we didn’t discuss today.
