Responsible investments in AI — how VCs can make better decisions
For anyone interested in learning more about AI ethics due diligence for VCs: in early June, we will host an event for VentureESG members in which we will apply this framework to a case study together. More details and registration soon. Follow VentureESG on LinkedIn for announcements.
This project was a collaboration between VentureESG and Ravit Dotan, PhD: an AI ethics advisor, researcher, and speaker. Find the full guidebook and flowchart here.
AI is (becoming) ubiquitous — and needs to be diligenced properly
Today, most companies already use AI. For example, in 2022, Accenture found that 77% of companies both used and provided AI. This technology is adopted so widely because it is versatile and effective. While AI is often associated with chatbots, self-driving cars, or sentient robots, AI’s top use cases are in back-office operations that all companies need, such as automating IT, security and threat detection, and automating business processes (IBM survey).
AI technologies can be very helpful, but they can also be destructive if used irresponsibly. Since AI systems typically make decisions about and process the data collected from masses of people, when something goes wrong it goes wrong at scale. Examples include mass discrimination, violation of data rights, and issues resulting from lack of human oversight, transparency, and explainability.
Attention to AI risks is correlated with improved financial performance, and many executives are already aware of this. For example, a survey by the Economist Intelligence Unit found that 94% of executives believe shareholders get a higher ROI from responsible AI, i.e. AI that is developed and used in ways that mitigate the risks. A 2021 McKinsey report found that the companies with the highest returns from AI engage in risk-mitigation practices more often than others.
Against this backdrop, I have collaborated with VentureESG to create guidelines for responsible investing in AI. It is intended to be used by VC funds across all stages of investment. You can find the full guidebook and flowchart here. This article will walk you through five steps to take when diligencing AI companies for VC investment, directly applied to a case study of a startup with strong use of AI.
STEP 1: Determine whether the company requires this diligence
Companies are at risk of creating AI-related harms if they develop AI/machine learning, use AI/machine learning, or process big data. If the company is at an early stage, it is enough that they are expected to fall into one of these buckets.
Applied example: Imagine a pre-seed fintech company that is developing technology for fraud detection in the financial sector — let’s call this company FINTECH. At the very least, they will be processing big data. Moreover, since they will need to analyze that data, they are likely to either develop or use AI. Therefore, they require AI due diligence.
STEP 2: Evaluate risk of conflict with regulation and your investor or LPs’ values
Companies that develop AI, use AI, or process big data are at risk of conflicting with existing and upcoming regulations. The most prominent is the EU AI Act, the bill that is expected to regulate AI in the EU. In addition, AI applications may conflict with your values. For example, investors raised concerns that facial recognition threatens human rights. Investors can divide AI applications into four risk categories, based on the severity of the potential conflicts with regulation in relevant jurisdictions and with the VC’s values:
- Extreme risk: AI applications that are, or are likely to become, illegal and/or are excluded by the VC's values.
- High risk: AI applications that are, or are likely to become, heavily regulated and/or are at high risk of conflicting with the VC's values.
- Moderate risk: AI applications that are, or are likely to become, lightly regulated and/or are at moderate risk of conflicting with the VC's values.
- Minimal risk: AI applications that are unlikely to be regulated and are aligned with the VC's values.
See the guidebook for guidelines on how to evaluate the risk of conflict with the EU AI Act.
Applied example (cont.): Our pre-seed fintech company, FINTECH, is at high risk. The company is active in the financial sector, which is highly regulated. Without attention to AI ethics issues, the company may struggle to comply with regulation, e.g.:
- AI may illegally discriminate
- AI may fail to provide legally-required explanations (e.g., on loan decisions)
In addition, applications in the financial sector have extensive social impact and therefore extensive potential for unintended consequences, such as wrongful discrimination. This may create potential conflicts with your VC’s values or thesis.
STEP 3: Evaluate the company’s responsible AI maturity
Companies that pose AI risks can do a lot to mitigate those risks. Investors can evaluate these efforts in the following way. Start with evaluating the company’s (and founders’) AI ethics knowledge, workflow, and oversight:
- KNOWLEDGE — To what extent does the company/founding team understand AI ethics?
- WORKFLOW — To what extent do the company’s workflows mitigate the risks?
- OVERSIGHT — To what extent do the company’s oversight structures support AI ethics?
The guidebook provides metrics that can help evaluate companies on these three dimensions.
Applied example (cont.): At FINTECH, the founders are generally familiar with AI risks, but they think that AI risks are only marginally applicable to their company. They don't plan to educate themselves further or be proactive. Their knowledge level is low because, while they are familiar with AI harms, they minimize the relevance to their company. Their workflow and oversight levels are low because AI ethics is not on the company's agenda.
After you evaluate the company on their responsible AI knowledge, workflow, and oversight, rank the company’s overall responsible AI maturity, from beginner to advanced:
- ADVANCED — At least: High knowledge, High workflow, and Medium oversight
- INTERMEDIATE — At least: High knowledge, Medium workflow, and Low oversight
- BEGINNER — No minimum requirements
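The ranking rules above can be sketched as a small decision function. This is an illustrative sketch only: the numeric ordering of Low < Medium < High and the function name are my own, not part of the guidebook.

```python
# Sketch of the overall responsible-AI maturity ranking described above.
# The numeric encoding of the dimension scores is an illustrative assumption.

LEVELS = {"Low": 0, "Medium": 1, "High": 2}

def maturity(knowledge: str, workflow: str, oversight: str) -> str:
    """Map the three dimension scores to an overall maturity rank."""
    k, w, o = LEVELS[knowledge], LEVELS[workflow], LEVELS[oversight]
    if k >= 2 and w >= 2 and o >= 1:   # at least High, High, Medium
        return "Advanced"
    if k >= 2 and w >= 1:              # at least High, Medium; oversight may be Low
        return "Intermediate"
    return "Beginner"                  # no minimum requirements

print(maturity("High", "High", "Medium"))  # Advanced
print(maturity("Low", "Low", "Low"))       # Beginner (FINTECH's profile)
```

The point of encoding the rules this way is that "at least" thresholds compose cleanly: a company with High scores across all three dimensions still ranks as Advanced.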
Applied example (cont.): FINTECH should be ranked as a beginner in responsible AI, scoring low on all three dimensions.
STEP 4: Determine investment eligibility
Investors can use the information obtained in this due diligence process to shape their decisions on next steps, including whether to invest in the company or even continue probing, whether to require an external audit, and what AI ethics support to provide. The rule of thumb is that the greater the gap between the company's risk and its responsible AI maturity, the more precautions you should take.
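The rule of thumb above can be made concrete with a simple gap score. The numeric scales below are my own illustration, assuming the risk categories from Step 2 and the maturity ranks from Step 3; the guidebook does not prescribe these numbers.

```python
# Sketch of the risk-maturity gap heuristic: the larger the gap,
# the more precautions an investor should take. Scales are illustrative.

RISK = {"Minimal": 0, "Moderate": 1, "High": 2, "Extreme": 3}
MATURITY = {"Beginner": 0, "Intermediate": 1, "Advanced": 2}

def precaution_level(risk: str, maturity: str) -> int:
    """Return a non-negative gap score; 0 means minimal precautions."""
    return max(RISK[risk] - MATURITY[maturity], 0)

# FINTECH from the running example: high risk, beginner maturity.
print(precaution_level("High", "Beginner"))    # 2 -> substantial precautions
print(precaution_level("Minimal", "Advanced")) # 0 -> minimal precautions
```

A score of 2 or more might correspond to requiring an audit or expert sign-off, while 0 suggests standard monitoring, but the mapping from score to action remains a judgment call for each fund.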
The goal of the external audit is to provide a more fine-grained assessment of the risks that the company poses. To conduct the external audit, choose someone who has both technical and ethical expertise and would be able to determine whether the company’s models, datasets, and other artifacts pose regulatory and ethical risks. The guidebook contains a list of some free and non-profit services that can be helpful. There are also many for-profit services, including consultants and platforms you can use.
Applied example (cont.): FINTECH is high-risk but beginner-maturity. That means that you need to take a lot of precautions. If the company had a product that could be audited, it would be sensible to require that they pass an audit or at least get approval from an AI ethics expert as a condition for investment. At pre-seed, before they even have an MVP, an audit would be premature. However, even at pre-seed, it would be a good idea to require that they work with an AI ethics expert on a regular basis. In the case of this company, it could mean that they meet with the expert once a quarter and attend an AI ethics workshop to jump-start their journey.
Step 5: If you invest, provide support
Investors can and should support the growth of their portfolio companies in AI ethics, just like they do in other areas. To support companies in increasing their responsible AI maturity, investors can educate and motivate companies to improve in their AI ethics knowledge, workflows, and oversight.
Applied example (cont.): In the case of FINTECH, helpful support measures include:
- Introduce the company to AI ethics experts
- Sponsor/subsidize ongoing support from an AI ethics expert
- Sponsor participation in an AI ethics workshop to jump-start their journey or bring in an AI ethics expert to provide a workshop for relevant companies within your portfolio
- Ask for annual responsible AI progress reports as part of the company’s ESG reporting
- Bring the topic up regularly in board meetings
In the guidebook, you can find more details about the due diligence process, suggestions on how to support portfolio companies, recommendations on resources for VCs and portfolio companies, and additional case studies.
About the Author
Ravit Dotan, PhD is an AI ethics advisor, researcher, and speaker. Her specialty is helping investors and tech companies develop responsible AI approaches. Her academic work includes directing the Collaborative AI Responsibility (CAIR) Lab at the University of Pittsburgh's Center for Governance and Markets. Her private sector work includes leading all AI ethics efforts at Bria.ai, a generative AI startup. Ravit holds a PhD in philosophy from UC Berkeley; her work has been featured in publications such as the New York Times and TechCrunch.
For more information about VentureESG and how we’re working to support Venture Capital funds with implementing ESG across their fund operations and end-to-end investment process, fill in this form, or drop us an email at hello@ventureesg.com.
Acknowledgments
This material is based upon work supported in whole or in part by the Notre Dame-IBM Tech Ethics Lab, the Center for Philosophy of Science at the University of Pittsburgh, and the Center for Governance and Markets at the University of Pittsburgh. Such support does not constitute an endorsement by the sponsors of the views expressed in this publication.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
