I’d love to see how removing humans from the equation affects the number of underrepresented founders who receive funding. Leaving aside for now the question of whether an algorithm can outperform or replace a human investor: could an algorithm be tuned to judge founders impartially? I assume you would train the model on a dataset that captures the essential characteristics of successful and unsuccessful companies. How could we ensure that what we build doesn’t inadvertently discriminate against underrepresented founders by placing too much trust in criteria (e.g., school attended or target market) that correlate more strongly with race or class than with the ability to build a company?
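One crude first pass at the concern above is a proxy screen: before training, measure how strongly each candidate feature correlates with a protected attribute and flag the suspicious ones for review. This is only a minimal sketch, not a complete fairness audit (it misses combinations of features that jointly encode a protected attribute); all names and data here are hypothetical.

```python
import numpy as np

def flag_proxy_features(X, protected, names, threshold=0.5):
    """Flag features whose absolute Pearson correlation with a
    protected attribute exceeds `threshold` -- a crude proxy screen."""
    flagged = []
    for j, name in enumerate(names):
        # Correlation between feature column j and the protected attribute
        r = np.corrcoef(X[:, j], protected)[0, 1]
        if abs(r) >= threshold:
            flagged.append((name, float(r)))
    return flagged

# Toy synthetic data: "elite_school" tracks the protected attribute
# closely; "prior_exits" is independent of it.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=500)
elite_school = np.where(rng.random(500) < 0.9, protected, 1 - protected)
prior_exits = rng.integers(0, 4, size=500)
X = np.column_stack([elite_school, prior_exits])

print(flag_proxy_features(X, protected, ["elite_school", "prior_exits"]))
```

Here "elite_school" would be flagged while "prior_exits" would not, which is the point: a feature can look like a neutral quality signal while mostly restating race or class. A real pipeline would go further (e.g., testing groups of features together, and checking model outcomes across groups, not just inputs).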