Selecting Directors Using Machine Learning
Isil Erel, Léa H. Stern, Chenhao Tan, and Michael S. Weisbach*
Can algorithms assist firms in their decisions on nominating corporate directors?
A company’s board of directors is legally responsible for managing the company. In principle, the board reports to the shareholders and represents their interests. In practice, however, director quality and the extent to which directors serve shareholders’ interests vary widely. Because a CEO often effectively controls the director selection process, he will tend to choose directors who are unlikely to oppose him, and who are also unlikely to provide the diverse perspectives that would help maximize firm value. Adam Smith drew attention to this agency problem as early as 1776 in The Wealth of Nations. In our study, we show how a 21st-century technology, machine learning, can help companies select higher-quality directors.
We construct algorithms to make out-of-sample predictions of director performance.
We develop algorithms with the goal of selecting better-performing directors. A clear measure of director performance is publicly available: the fraction of votes a director receives in the shareholder elections. This vote reflects the support a director personally has from the shareholders and should incorporate all publicly available information about the director’s performance. Using shareholder support for individual directors in subsequent elections and firm profitability as performance measures, we construct algorithms to make out-of-sample predictions of these measures of director performance.
We first examine whether our algorithms can accurately forecast the quality of directors who are actually chosen by firms. In this way, we can show how machine-selected directors differ from management-selected directors to shed light on the director nomination process.
We assess the algorithm’s performance on a large sample of new independent directors appointed to the boards of large publicly traded U.S. corporations between 2000 and 2014. Our algorithm, which uses XGBoost, accurately predicts the success of individual directors and, in particular, identifies which directors are likely to be unpopular with shareholders. In contrast to the machine-learning models (colored lines in the figure below), standard econometric models such as OLS fit the data poorly out of sample: the actual performance of individual directors is unrelated to the predictions of traditional statistical models.
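The gap between a gradient-boosted model and OLS can be illustrated with a small synthetic exercise. This is not the authors' code: the feature names, data-generating process, and the use of scikit-learn's `GradientBoostingRegressor` as a stand-in for XGBoost are all assumptions made for illustration. The point is only that when the outcome depends on interactions and nonlinearities, boosted trees fit well out of sample while a linear model does not.

```python
# Illustrative sketch (NOT the paper's actual model or data): compare
# out-of-sample fit of gradient boosting vs. OLS on a synthetic outcome
# with interactions and nonlinearities. All features are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))  # stand-in director features

# Synthetic "director performance": interaction + threshold + quadratic terms
# that a linear model cannot capture, plus noise.
y = (np.tanh(X[:, 0] * X[:, 1])
     + 0.5 * (X[:, 2] > 0)
     - 0.3 * X[:, 3] ** 2
     + 0.1 * rng.normal(size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
ols = LinearRegression().fit(X_tr, y_tr)

print(f"boosted trees out-of-sample R^2: {r2_score(y_te, gbm.predict(X_te)):.2f}")
print(f"OLS           out-of-sample R^2: {r2_score(y_te, ols.predict(X_te)):.2f}")
```

On data like this, the boosted model recovers most of the signal out of sample while OLS explains almost none of it, mirroring the pattern the paper reports for director performance.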
Tests of the quality of these predictions show that directors predicted to do poorly indeed do poorly compared to a realistic pool of candidates.
Then we consider whether the algorithm could suggest plausible alternative director choices who would have performed better. For each board appointment in our test set, we construct a realistic pool of potential candidates: directors who joined the board of a smaller neighboring company within a year. Presumably these candidates would have found a directorship at the larger nearby company attractive, since directorships at larger companies tend to be better paid and more prestigious than directorships at smaller companies. They also signaled that they were available and willing to travel to this specific location for board meetings. Although we do not observe the performance (i.e., the label) of these potential candidates (the selective labeling problem), the design of our candidate pools allows us to observe what we refer to as their “quasi-label”: their performance on the board they actually joined.
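The candidate-pool construction above can be sketched as a simple filter over director appointments. The column names, the use of headquarters state as a proxy for "nearby," and the toy data are all hypothetical; the paper's actual construction uses its own data and distance definitions.

```python
# Illustrative sketch (assumed schema, NOT the paper's dataset): for one focal
# appointment, keep directors who joined a smaller board in the same area
# within a one-year window.
import pandas as pd

appointments = pd.DataFrame({
    "director": ["A", "B", "C", "D"],
    "firm_size": [1.0, 0.4, 0.6, 5.0],     # hypothetical size measure
    "hq_state": ["OH", "OH", "OH", "CA"],  # crude proxy for "neighboring"
    "start": pd.to_datetime(
        ["2010-03-01", "2010-06-15", "2011-01-10", "2010-05-01"]),
})

def candidate_pool(apps, focal_size, focal_state, focal_date, window_days=365):
    """Directors who joined a smaller nearby board within the time window."""
    nearby = apps["hq_state"] == focal_state
    smaller = apps["firm_size"] < focal_size
    in_window = (apps["start"] - focal_date).abs().dt.days <= window_days
    return apps[nearby & smaller & in_window]

pool = candidate_pool(appointments, focal_size=1.0,
                      focal_state="OH", focal_date=pd.Timestamp("2010-04-01"))
print(pool["director"].tolist())  # ['B', 'C']
```

Here directors B and C qualify: both joined smaller Ohio boards within a year of the focal appointment, while A's firm is not smaller and D is in a different state.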
Compared with a realistic pool of potential candidates, directors predicted to do poorly by our algorithms rank much lower in performance than directors who were predicted to do well. In addition, individuals who were predicted by the model to perform well, and did accept directorships at nearby firms, also performed better at those firms than the directors who were chosen by the firm in question.
Predictably poor performing directors are more likely to be male, have more past and current directorships, fewer qualifications, and larger networks than the directors the algorithm would recommend in their place.
The differences between the directors suggested by the algorithm and those actually selected by firms allow us to assess which features are overrated in the director nomination process. Deviations from the algorithm’s choices suggest that firm-selected directors are more likely to be male, to have a finance background, to have previously held more directorships, and to have fewer qualifications and larger networks. A plausible interpretation of our results is that firms that nominate predictably unpopular directors tend to choose directors who resemble existing directors, while the algorithm suggests that adding diversity would be a better idea.
In a sense, the algorithm is telling us exactly what institutional shareholders have been saying for a long time: that directors who are not old friends of management and who come from different backgrounds are more likely to monitor management. In addition, less connected directors may provide different, and potentially more useful, perspectives on policy. For example, TIAA-CREF (now TIAA) has had a corporate governance policy aimed in large part at diversifying boards of directors since the 1990s for this reason.
Overall, machine learning holds promise for understanding the process by which governance structures are chosen, and has the potential to help real-world firms improve their governance.
We emphasize strongly that algorithms complement rather than substitute for human judgment. Algorithmic decision aids could help firms identify alternative choices of potential directors, thereby opening up board seats to a broader set of candidates with more diverse backgrounds and experiences who would otherwise have been overlooked.
Erel: Fisher College of Business, Ohio State University, NBER, and ECGI; Stern: Foster School of Business, University of Washington; Tan: Department of Computer Science, University of Colorado, Boulder; Weisbach: Fisher College of Business, Ohio State University, NBER, and ECGI.