Here’s how machines can learn to find the best nonprofits.

The beta version of a formula designed to identify top nonprofits.

Angela Rastegar
4 min read · Jun 9, 2016

Our generation has the power to end the world’s greatest challenges: extreme poverty, inequality, illiteracy, environmental degradation, and more. We have the resources. We’re just not allocating them as effectively as we could be.

Even though we have a lot of data about proven interventions, and thousands of organizations are doing amazing work, that information is not easily accessible to aspiring donors. Most people make giving decisions based on familiarity or branding, and many choose not to give at all. We want to change that.

Our startup, Agora, will be rolling out the beta version of an Impact Scorecard, designed to let donors pull a sorted list of nonprofits on Agora’s site based on their estimated impact. Designing such a formula is complex, which is why we’re publishing our logic model openly, ahead of the beta rollout, for feedback.

A pathway to building an impact database.

Impact Scorecard at a Glance

This scorecard is designed to reward nonprofits for measured impact and transparency without adding yet another time-intensive reporting process to their workload. Much of the scorecard will leverage the expertise of existing sector evaluators and experts, aggregating that information in a clear, searchable way (a rough analogy: Rotten Tomatoes for movies).

This scorecard won’t be fully dependent on outside expertise, however. Agora is also a social platform for donors to harness the experiences and insight of their peers. As such, embedded in the scorecard are additional points for groups that have been endorsed by members of the Agora for Good community or that have received donations from donors within a user’s network.

This scorecard will include a formula with the following components:

Expert Endorsements.

Any nonprofit that receives funding, awards, a positive evaluation, or other forms of official recognition from nonprofit evaluators or vetters would receive “points” proportional to the level of endorsement, making up as much as 35% of the total score. This includes effectiveness evaluators (such as GiveWell and impact-focused foundations), organizations with impact evaluation standards, and groups with staff or volunteers dedicated to nonprofit evaluation. Scores will be higher for endorsements from experts with published results, dedicated team time, and recognized sector expertise.

Impact Evaluations.

The algorithm will also assign points to nonprofits that have conducted impact evaluations, randomized trials, or other assessments of their programs’ outcomes against their stated goals, and that share these evaluations on their profiles. Nonprofits that can cite relevant impact evaluations of similar interventions, but not of their particular program, will be awarded a fraction of the allowable points. Agora will also maintain its own catalog of intervention-level systematic reviews, which nonprofits can match their programs against. Impact Evaluation points can add up to 12% of the total Impact Scorecard score.

Network Effects.

Nonprofits will be ranked based on the degree of public endorsement they receive on Agora’s platform and, as social elements are added, on the degree of support from an individual’s personal network. Network effects can make up as much as 35% of the overall score, but may be represented as an optional filter.

Transparency.

Nonprofits that publicly complete information about their focus problem, solution, and (when launched) longitudinal goals via the Agora platform will also receive points, adding up to as much as 12% of the final score.
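To make the weighting concrete, here is a minimal Python sketch of how these four capped components might combine into a single score. The component names, the clamping, and the example inputs are our own illustrative assumptions, not the final beta formula.

```python
# Hypothetical sketch of the Impact Scorecard aggregation.
# Component names, caps, and inputs are illustrative assumptions only.

# Maximum share of the total score each component can contribute.
COMPONENT_CAPS = {
    "expert_endorsements": 0.35,  # funding, awards, evaluator recognition
    "impact_evaluations": 0.12,   # RCTs, outcome assessments, cited reviews
    "network_effects": 0.35,      # endorsements/donations on the platform
    "transparency": 0.12,         # completed problem/solution/goal profile
}

def impact_score(raw_scores: dict[str, float]) -> float:
    """Combine normalized component scores (each in [0, 1]) into a
    single 0-100 Impact Score, weighting each by its cap."""
    total = 0.0
    for component, cap in COMPONENT_CAPS.items():
        # Clamp each component to [0, 1] so it can never exceed its cap.
        raw = min(max(raw_scores.get(component, 0.0), 0.0), 1.0)
        total += raw * cap
    return round(100 * total, 1)

# Example: strong expert backing, a direct impact evaluation,
# moderate platform endorsements, and a fully completed profile.
print(impact_score({
    "expert_endorsements": 0.8,
    "impact_evaluations": 1.0,
    "network_effects": 0.5,
    "transparency": 1.0,
}))  # -> 69.5
```

Note that the published caps sum to 94%, so in this sketch a perfect score tops out at 94; how the remainder is allocated or normalized is left open in this beta version.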

Agora: A digital library of social impact data

Open Questions

We recognize that this first iteration of our scorecard has flaws and limitations; our goal is to continually iterate on the algorithm as we collect more data. We also understand that there are some risks to adding a score that relies on nonprofit cooperation and candor, some of which we address below:

What about nonprofits that pursue multiple strategies, with differing effectiveness or transparency for each? Nonprofits will ultimately be evaluated along each of their stated goals, with points awarded proportionally based on the strategies used and how effective each is. Points for endorsements, however, will have to apply in their entirety, as most endorsements are for organizations as a whole, not for specific programs.

What about multiple evaluations, or outdated information? The most recent information will be scored the most highly, but to start we probably won’t be able to tell if a nonprofit is ‘hiding’ new (and negative) information.

Minor differences in interventions can have major effects on results. How do you track that? To start, we won’t have an easy way to review this, but again, we anticipate building out a peer review system that can help flag misrepresented data. We will also reward groups that report direct evaluations of their own programs over those that can only point to related assessments.

What happens if a nonprofit is dishonest? We anticipate building a ‘peer review’ system, but we won’t edit the data ourselves unless an issue is flagged. We are also collaborating with review organizations such as ImpactMatters that can help validate this data.

Will hyper-transparent organizations score highly even if they have little impact? We expect they will not, because we rely on the evaluations of funders who look for impact, in addition to rewarding transparent reporting. They will, however, still receive some scoring benefit: transparency is a crucial first step toward effectiveness, and we want to reward honesty.

Many other questions will arise, and we welcome all — please share your thoughts.

Learn more at www.agoraforgood.com.

