Ethics of AI Surveillance Tech — Conversation with Francesca Rossi
This week, we launched our new series of interviews with brilliant women working in the critical space of AI Ethics to discuss their groundbreaking work. Our first 6 episodes are sponsored by IBM, and we kicked off the series with Francesca Rossi, IBM Fellow and AI Ethics Global Leader. I have admired Francesca’s work for a long time and was delighted to finally get a chance to meet her (virtually). We covered a broad range of topics, including her career journey, the importance of multidisciplinary perspectives, and IBM’s stance on the ethics of commercially deployed AI systems, including facial recognition technologies.
You can read our conversation below and/or listen to the audio. (You will hear some pinging noises in the background, as we didn’t get a chance to mute the notifications, and our furry research assistant also decided to join in.)
Mia Dand: It’s great to have you here! You are a pioneer in this space and you were on the very first Women in AI Ethics list that I published in 2018. Can you walk us through your journey and how you landed your current role at IBM?
Francesca Rossi: Sure. First, thank you for your interest in my work; being included in that list was really an honor.
So basically, I’ve been a researcher in AI — artificial intelligence — for many, many years, so I have a very long academic career in that space. Then at some point I started thinking, and it was not just me, the whole community started thinking, about the ethical implications of the use and deployment of this technology in our society.
So little by little, the whole community started thinking about the impact of the technology, possible guidelines, possible best practices, and so on. Over these last years, the AI Ethics community has really made a lot of progress, and a lot of initiatives were started by companies, by countries, by collaborations between countries and companies, and by all the other stakeholders in civil society, in order to tackle these issues around AI ethics from many different angles. From my point of view, I started looking at them when I was still in academia, but then I went deeper into these issues when I moved and joined IBM.
I joined IBM as a researcher and so I started looking at this from a research point of view, but also from a company-wide point of view. So, what does it mean to build a technology that is trustworthy, that people can trust? And also not only to build it but also to deploy, to use it. So, in a corporate environment, these are all aspects that are all important in order to really make sure that the technology that is then delivered to the real world has the right properties and is used in the right way.
So my journey went from a more academic kind of engagement with this discussion to a deeper engagement from inside a company whose business is largely based on delivering and using AI.
Mia Dand: So much of the ethical progress we have been making can be attributed to leading women like yourself in this space, so we are thrilled to get your perspective.
At the height of your 22-year academic career, you took a leave of absence from your tenured full professorship in Computer Science at the University of Padua in Italy and spent a year at Harvard University’s Radcliffe Institute. How did that experience influence your perspective and your current approach to AI Ethics?
Francesca Rossi: Oh, that was fundamental, because the Radcliffe Institute hosts about 50 fellows every year, from all over the world and all the disciplines. For example, that year I was the only computer scientist, and there were people covering all the other sciences, all the arts, and all the humanities. So it’s really a very multidisciplinary environment. And what the Radcliffe Institute staff does is force these people to work together, to spend time together, to discuss topics together, and to give lectures to one another. So it was really the time for me to go from talking only to my peers–the AI researchers I was usually talking to–to people that did not care about my latest result or experiment or theorem but wanted to address broader questions about the impact of what I was building and studying on society, on people. Okay, you want to advance AI, but when it is delivered to the real world, what’s the impact? Is that the impact that you want or not?
So they really forced me to think about these questions, together with them, and I still have collaborations with some of them from different disciplines (philosophers, psychologists, and others) who also started working on AI ethics. That was really the starting point for me to think about these issues. That was really the turning point in my career.
And I have to say, the reason why I applied for the Radcliffe Fellowship is that so many of my colleagues who had also been Radcliffe fellows told me it was a turning point in their careers, so I was curious to see what it would mean for me. Then I figured out what it meant: of course, I continued working in AI as well, but I started to really focus on AI ethics.
Mia Dand: I loved hearing about the multidisciplinary approach that has influenced your perspective, Francesca. You’ve said that a big part of your job at IBM is building an ecosystem of public trust. At a time when the headlines are filled with details of unethical transgressions by Big Tech, that must make your job considerably harder. How are you continuing on your mission to build that trust?
Francesca Rossi: Well, first of all, not every company using or deploying AI follows the same business model. There are fundamentally two big, broad categories of business models around AI. There is the business model of companies that collect a lot of data and then use that data, with the help of AI, to optimize or improve their own operations. Then there is another business model, which IBM mostly follows, of using AI to build services that we deliver to other companies, like a government or an airport or a hospital or a financial institution, for their services, for their operations, right? Whatever they need to do.
In the second business model, we at IBM don’t really collect data. We don’t collect huge amounts of data from single individuals and then try to monetize that data in order to do whatever we need to do.
In that kind of business model, we need to be very careful, and very clear, about how we use the data of our clients, which are these other companies. And we need to be very careful about the properties of the AI services that we deliver to our clients: whether they are fair, transparent, explainable, robust, and so on.
We have fewer problems dealing with the data of single individuals, which is typical of the first AI business model I talked about. Most of the headlines that you see are really about using data of individuals in ways that are not transparent enough or not consensual.
So that is less of an issue for IBM, and in general for the business model that IBM follows. But of course, we still have to work with our ecosystem of partners and clients to implement internally the best practices in order to deliver, for example, AI models that do not discriminate, that are fair, so that we can detect and mitigate AI bias as much as possible, as well as all the other pillars that have to do with privacy, explainability, transparency, fairness, robustness, and so on.
Mia Dand: IBM has also taken the lead, along with other private companies, in providing guidelines for the responsible use and development of AI technologies. Can you walk us through what that approach entails, including how the AI Ethics Board, which you have co-chaired since 2019, functions?
Francesca Rossi: Yes. The internal governance within the company regarding AI ethics, and every decision we make around AI ethics, is based on this mechanism, the AI Ethics Board, which has representatives from all the business units of the company and is co-chaired by me and by the Chief Privacy Officer of IBM. The reason there are representatives from all the divisions, all the business units, is that one of the goals is to bring awareness and coherence about what the whole company is doing around AI ethics. And these people are not just any person in that business unit; they are people that have decision power in those business units, so once the board makes a decision, they can go back and actually implement those decisions in their respective business units.
So the board runs a lot of initiatives around AI ethics, from practices and methodologies for our developers, for example, as I said, to detect and mitigate bias. What do they need to do? What definitions of fairness should they consider? What are the protected variables? What thresholds for bias are acceptable? What tools should be used? What governance should be put in place in the AI lifecycle, and how should this methodology be integrated with other methodologies, like security by design and privacy by design? So, best practices and guidelines for our developers.
But then we also do a lot of educational material: more in-depth for developers, but also a mandatory educational module for every IBMer, to learn and be aware of the risks inherent in AI technology.
And then we also do a lot of evaluation of the offerings the company makes to our clients. We meet very regularly, almost every week, to evaluate new offerings that a product team brings to the attention of the board, and to see whether they are aligned with our principles, or whether we need to change something in the technology, in the use case, or in the contractual agreement to make them aligned with our principles.
The AI Ethics Board is also supported by so-called focal points. These are people distributed across all the business units who are the link between the decisions of the board and everybody else in their unit. They bring the decisions and make them visible and understandable to the people in that unit. In the other direction, they also bring grassroots initiatives from that unit to the attention of the Board, which can then scale them or even support them at the level of the whole company.
We also have an advocacy network of passionate people everywhere at IBM who are engaged in specific projects of the Board, not because it is their everyday job, but really because they want to help. So we have a mechanism to engage with these people because, unlike other topics, AI ethics is really a topic that people are very passionate about, and they are willing even to go beyond what is required of them to really help the whole company deliver AI that is beneficial and positive.
Mia Dand: I think it’s a great sign when so many people are so passionate and taking an interest in driving this forward, because we do need a more inclusive approach. Coming back to your own experience and background: you are influential not just inside IBM, but you have also held far-ranging, impactful roles outside of IBM.
You were a member of the European Commission’s High-Level Expert Group on AI. You have been the President of IJCAI, the International Joint Conference on Artificial Intelligence. You were also the Editor-in-Chief of the Journal of AI Research. Currently, you are on the Board of the Partnership on AI, where you represent IBM, which is a founding partner. You’re also a member of the Steering Committee and Responsibility Working Group of the Global Partnership on AI (GPAI). And you’re the President-elect of the Association for the Advancement of Artificial Intelligence (AAAI), which runs one of the largest conferences on AI. That’s a lot. How have you seen these organizations consider the impact on, and include voices from, marginalized communities who are harmed or impacted most by these technologies, including surveillance technologies?
Francesca Rossi: First of all, regarding these engagements with all these other organizations: some of them are related to AI research, so as a researcher, I’m engaged in contributing to and leading the AI research community. Some others are related to AI ethics specifically, like the Partnership on AI or the Global Partnership on AI, and so on.
Most of them, especially those related to AI ethics, are also part of the IBM approach, which is not only looking internally at all these things I mentioned but also working with many other companies and other stakeholders to really understand together the best practices and what is best to do about this technology in terms of its development, deployment, and use. That’s why we were a founding partner of the Partnership on AI, a multi-stakeholder organization that defines best practices around AI, among many other initiatives. We work with the UN, with the World Economic Forum, with various governments, with the European Commission, and so on.
So that’s very important because a company — no matter how large or how global — can’t identify the issues and find the solution by itself. It needs a multi-stakeholder approach so that’s why we’re engaged with so many of these.
Then in each one of these associations or organizations that I work with, I see a lot of emphasis on tackling issues related to fairness and other issues related to AI ethics. For example, the Partnership on AI has a project called Diverse Voices that tries to understand how to bring all the diverse voices into the discussion about what to do around AI ethics.
In another engagement, with the World Economic Forum, I co-chair one of their councils on AI, and there is really a lot of diversity among the members and their countries, including people from very diverse communities and backgrounds, like the Maori communities in New Zealand or the Global South. So diversity is really one of our focuses.
Then there is this association, AAAI, which is a worldwide association for AI researchers. Every year at our conference we have a lot of initiatives around gender inclusion, for example, to help young women but also senior researchers to really close the gap in gender participation, contribution, and leadership in the community.
Then I think about SIGCHI, another worldwide conference around AI that happens every year in the summer, in different places around the world. Next year, for example, there will be a new track, which I’m co-chairing, on using AI to advance the UN’s sustainable development goals. So you see that everywhere in these associations, institutions, and organizations related to AI, there are initiatives linked to AI ethics: how to identify the issues, and how to tackle them together.
Another example: a lot of work is guided by standards bodies like ISO and IEEE, where many standards working groups are now focused on AI ethics and on what the best practices are around building AI.
Around facial recognition, there are some initiatives as well. For example, the Partnership on AI had a report educating people about the capabilities and limitations of facial recognition, about understanding the difference between facial recognition, facial identification, and facial authentication, and about identifying the different risks related to these different technologies. Everywhere you look in these organizations, there are initiatives related to AI ethics.
Mia Dand: That’s music to my ears. And especially when you mentioned gender inclusion, because the representation of women in this space is so low and over the years, there have been many initiatives trying to get more women in this field. I feel like the impact that these organizations have in driving this forward cannot be overstated. So I’m glad to hear that’s definitely on the radar and there’s progress being made in that area as well.
To wrap up our conversation today, I would like to bring up an announcement that IBM made, which I give you (IBM) a lot of kudos for, because you’re walking the walk, not just talking the talk, in your approach to responsible AI. Back in 2020, IBM made the notable announcement that it was canceling its facial recognition program, which was widely commended. There was a powerful statement from the IBM CEO at the time, Arvind Krishna, who said, “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms.” I would love to hear your thoughts on what this decision signifies for the future of AI.
Francesca Rossi: First of all, that decision focuses on facial recognition, yes, but more broadly on the use of technology, including facial recognition, for mass surveillance and for uses that impact human freedoms and rights, so it’s much broader than facial recognition. Then other companies followed and released similar statements. This is very important, and I think it pushed a lot of stakeholders to really think about whether there are uses that we don’t want to see for this technology, not just, again, facial recognition, but in general: whatever technology you put in place, AI or otherwise, that implies this mass surveillance or impact on human rights.
If you look at the recent European Commission AI regulation proposal (released in April of this year and still being discussed; it will then need to be approved by the European Parliament, and once approved, all of Europe will have to comply with it), it identifies four levels of risk. The highest one is called unacceptable risk, and that level includes the use of facial recognition by governments in public spaces. So it seems that this idea of using technology for mass surveillance, at least in this part of the world, not just the U.S. but also Europe, is not something that we would like to see.
And so this statement by IBM also resonated positively with other companies, with the European Commission’s approach, and so on.
In terms of what it means for IBM: the statement says not only that we no longer offer this technology, but also that we don’t condone it. What does that mean? It means that we don’t partner with other companies that may release that kind of technology for those purposes. Even if we don’t release it ourselves, we don’t enter into deals with companies that do, okay? So that’s an even broader statement. Of course, you can imagine that for a company this may eliminate some business value, because there may be clients that ask for that kind of technology, used in that way, in certain parts of the world, especially for a global company like IBM, which has offices everywhere in the world. But I think that overall, any company should be focused on its own values, the values it conveys with its actions, its operations, and the technology it delivers, and on the trust of its own clients. We believe that these kinds of uses of the technology are not compatible with our values, and not even compatible with keeping, improving, supporting, and protecting the trust of our clients. So it doesn’t matter that some business value is lost because we don’t release the technology to be used in that way. Overall, a company has to stick to its values and to the role it wants to play in society, and to what it brings to its ecosystem as well.
Mia Dand: It was refreshing, in a world where tech companies are chasing the dollar and engaging in unethical practices, to see IBM take a very strong stance in favor of ethical business practices.
Join us again next Tuesday for our conversation with Phaedra Boinodiris, Trust in AI Business Transformation Leader at IBM, to hear how she made the jump from gaming to AI, what keeps her up at night, and her plan for increasing the inclusion of women in the male-dominated tech industry. Subscribe to our “AI Ethics Diaries — Keeping it Real” podcast series to get deep insights from amazing women doing ground-breaking work in this critical space.
The views in this interview are those of the person being interviewed and don’t necessarily represent IBM’s positions, strategies or opinions.