How are we selecting our inaugural cohort?
We have previously shared our perspective on the question “Who could make a great Visible Hands fellow?” (If you haven’t read about our personas yet, you can check them out here). In that piece, and on many other occasions, our team has expressed that we are aiming to look beyond traditional indicators of talent when selecting our inaugural cohort.
In VC, pattern-matching is pervasive. It is common for investors (intentionally or not) to favor founders who resemble previously successful founders. Alumni affiliation, geography, gender, and racial identity are all factors at play here.
As Daniel Applewhite pointed out in 2018, “Pattern recognition has enabled VCs to mitigate risk but has also limited their profit potential and created an inherent funding bias. This bias stems from barriers to early-stage capital […] and is perpetuated by systems of racism that destroy opportunity within communities of color.” By basing investment decisions on previous models of success, investors reinforce the exclusion of many capable founders who may not fit the mold.
At Visible Hands, we wanted to generate a new model for evaluating founders. As such, we have intentionally designed our cohort selection process to reduce bias. Here are four key measures we have implemented so far:
Minimizing reliance on resumes
In the initial review stage, evaluators do not have access to applicants’ resumes or LinkedIn profiles. Instead, applicants apply for our fellowship program by answering a series of short answer questions focused on their strengths, skills, and competitive advantages. In moving applicants forward in the selection process, we focus solely on the quality of these answers, rather than on the credentials listed in their resumes.
Standardizing interviews and assessment frameworks
We have standardized our application and interview questions to get at the heart of our key traits. In choosing these evaluation criteria, we were intentional about avoiding traits packaged in masculine, upper/middle-class, and white defaults, a best practice borrowed from researchers Sapna Cheryan and Hazel Rose Markus, who studied the cultural bias of masculine defaults in male-dominated fields. Instead, we center traits like resilience and resource magnetism, which can be demonstrated in a variety of ways.
Because every applicant receives the same set of questions, we do not need to rely on organic conversation to reveal what we want to know about their experience and expertise.
Additionally, in each stage of our assessment process, we implement scorecards to grade applicants’ responses on a predetermined scale. This allows all evaluators to follow the same framework in assessing applicants, keeping everyone focused on the predefined factors that matter most.
Implementing a collaborative review process
We are using a community-driven review process to ensure that every applicant gets a fair look. In the first review stage, each application is independently reviewed and scored by a diverse group of evaluators from the Visible Hands Ecosystem. This minimizes the potential for groupthink and encourages genuine differences of opinion among our evaluators.
In all subsequent stages, multiple evaluators review each applicant, bringing varied perspectives to every decision.
Training evaluators on race and gender bias
We led our volunteer evaluator training in partnership with a research team at Boston University. As part of this training, we collected stories from underrepresented founders in our network about their experiences interacting with investors. We had our volunteers listen to these stories and participate in a perspective-taking exercise to generate empathy and understanding. Perspective-taking is a practice in which we actively consider others’ psychological experiences to combat automatic expressions of bias without decreasing our own sensitivity to inequalities.
Additionally, we shared some of the latest research on the experiences of underrepresented entrepreneurs and collectively discussed ways that Visible Hands could foster an equitable and comprehensive evaluation process. Evaluators left the training with an awareness of the ways that implicit bias can manifest and a deeper understanding of the current environment that Visible Hands is trying to disrupt.
By implementing these practical measures for a fairer selection process, we improve our chances of finding exceptional fellows who might otherwise be overlooked. However, we’re not stopping here — we know that our selection process will continue to improve and evolve as we apply what we learn from this inaugural cohort.
Have ideas, feedback, or recommendations? Send them our way: firstname.lastname@example.org.