Artificial Intelligence and Corporate Social Responsibility

Interview with Dunstan Allison-Hope, Managing Director, BSR

Roya Pakzad
Humane AI
Feb 1, 2018 · 6 min read


Accountability.

How many times have you heard this word when talking about a company’s role in our society? Whether in a casual conversation with a friend about self-driving cars, in reporting on automation and the future of work, in a corporate boardroom, or on the street protesting for privacy protection, I bet you have heard and used the sentence: “Companies should be held accountable.”

The rapid advancement of AI technologies has opened up new challenges for companies with regard to their social responsibilities. A company whose motto was once to “move fast and break things” now finds itself compelled by its customers, civil society groups, governments, shareholders, and perhaps its own conscience to hit the brakes, look backward, and move cautiously toward the future.

Dunstan Allison-Hope and the organization he is part of, BSR (Business for Social Responsibility), are among the forces working to steer companies toward a better path in their commitment to ethics, human rights, and sustainability. As a managing director at BSR, Dunstan has worked on a diverse range of corporate social responsibility issues, including privacy and freedom of expression, human rights, stakeholder engagement, and transparency reporting, in different parts of the world.

Below is my conversation with Dunstan about tech companies’ social responsibility with regard to AI and other emerging technologies.

Dunstan Allison-Hope, Managing Director, BSR (Credit: BSR)

Roya Pakzad: Dunstan, I have been following your work for the past year, and based on your publications, I realized you have a strong interest in corporate social responsibility with regard to artificial intelligence. Why is that?

Dunstan Allison-Hope: I’ve worked a lot with tech companies on privacy and freedom of expression, and AI is a natural extension of that. These big technologies are going to change businesses and raise new social, ethical, environmental, and sustainability issues, and that is what we at BSR care about.

Roya: What are some examples of those risks that companies should care about?

Dunstan: Specific risks vary from company to company and from industry to industry, but it’s fair to say that a product that is launched might have adverse human rights impacts. These could include privacy violations, freedom of expression issues, discriminatory impacts, and impacts on children’s rights online. Children increasingly participate online and are active digitally; they have rights, and they are an especially vulnerable group.

Roya: The UN Guiding Principles on Business and Human Rights (UNGPs) define certain principles in terms of the corporate responsibility to respect human rights. How can those principles be applied here?

Dunstan: The UNGPs talk about the due diligence process, which means having a commitment to human rights and assessing actual and potential adverse human rights impacts. Companies should engage to identify those potential adverse impacts and put in place mitigation plans to address them. The UNGPs are pretty clear on that, but I think technology is complex, and this makes it difficult in practice. The challenge for technology companies is that a lot of impacts happen during the products’ use phase. So, how can you assess potential adverse human rights impacts when you don’t know how these products are going to be used? That’s challenging, but it should not be an excuse for companies. Companies can put policies and processes in place to prevent those risks and make sure all those [adverse impacts] are factored in during product design and release.

Roya: I understand that those policies and processes should be based on certain principles and guidelines. Currently there are several sets of guidelines, codes of ethics, and principles, but I don’t quite see a concrete plan for implementing them at the industry level. How can companies translate those guidelines into practice?

Dunstan: The current principles are very high level, but there is enough similarity between them at this stage that they set a very good direction for companies. I think the challenge is that we don’t know what “good” looks like in terms of how to actually implement them. In labor standards and supply chains, for example, we already know how companies should implement them. When it comes to AI, it’s so new. What we need is a combination of real-life examples and case studies. By looking at different use cases and real-life examples, you might realize that some principles need to change in practice.

We will also need industry-specific versions of these guidelines: how to apply a good ethical AI guideline to financial services, how to apply it to criminal justice, how to apply it in the context of social media platforms. In the US, for example, as a result of civil rights protections there are various things that companies are not allowed to do, and AI is subject to those rules. But there might be loopholes and risks, because those protections were written for a different age, and government tends to lag behind technological developments. Have a look at the net neutrality debate, telecom regulations, or online privacy rules: government tends to move more slowly than technology does.

Roya: In the past you proposed the concept of Human Rights by Design for technology companies. How can companies apply that concept in their social responsibility efforts?

Dunstan: So in normal human rights impact assessments, it is typical for a company to take a cross-functional approach. The assessment might be run by the legal team or the public affairs group, but it is typically overseen by cross-functional teams: human resources, legal, public affairs, social responsibility, and supply chain groups all typically participate. But the engineering function and product development teams are usually absent. This is the blind spot. You might not need to change the actual impact assessment tools very much, and you might not need to change the questions very much, but you should change who is participating. And I’m not convinced that is happening. Different sets of communities should get involved, including engineers, data scientists, and product development teams in general.

The other issue is that, in practice, a lot of human rights impact assessments cover a market, a country, or a company overall. They are rarely about specific products or product categories. I think we need more human rights impact assessments at the product level: for example, on new types of communication products, and on new types of big data and analytics tools that companies didn’t have before. Products themselves should be subject to assessments, perhaps through an extended version of today’s privacy by design methods.

Roya: Any successful examples among companies?

Dunstan: Some, in the context of broader projects, but not nearly as directly as would be ideal. Microsoft is doing a human rights impact assessment for AI, and it will be very interesting to see what they conclude. That’s a good example.

Roya: With regard to applying human rights standards, do you think technology companies respond better to voluntary regulations or hard regulations?

Dunstan: I think both is the answer! I read a very interesting article the other day — I believe I linked to it via your newsletter — about regulating specific topics, such as access to credit, rather than AI overall, which might cause many different types of unintended consequences. I also think that whether voluntary or mandatory, approaches need to work with the grain of existing internationally agreed frameworks for sustainable business, such as the UN Guiding Principles, the OECD Guidelines for Multinational Enterprises, and the G20/OECD Principles of Corporate Governance. Personally, I’m a big fan of disclosure requirements and transparency as drivers of improved performance and accountability.

Roya: Any final thoughts to share?

Dunstan: There is a need to bring together more actors, more deliberately than is currently happening. Sustainability teams and social responsibility teams have a long history of engaging with big social challenges, and they need to be more engaged in the ethics of AI. But that debate also needs engineers and data scientists. These kinds of multi-disciplinary approaches are essential, and there is room for improvement there.

We wrapped up here. This conversation was part of the interview series for my newsletter, Humane AI. I will continue talking with both policy and technical experts in the field of AI ethics in future installments. Tune in to hear their opinions on many issues, including cybersecurity and AI, machine learning in disaster management and humanitarian contexts, human rights and AI for social good, and much more. To subscribe to the newsletter, click here.

Roya Pakzad

Researching technology & human rights, Founder of Taraaz (royapakzad.co)