Reflections on ML models of first impressions

Alex, Jordan, Josh, Stefan & Tom
May 2, 2022

You cannot determine the identities, attitudes, or competencies of a person from an image of their face.¹ But some people believe they can², and irrespective of these beliefs³, people show high levels of commonality in their superficial judgments⁴ ⁵ ⁶ ⁷ and act on these inaccurate judgments⁸ ⁹ ¹⁰ in ways that cause harm.¹ ¹¹ Our work aims to understand the problematic sources of this agreement; this is critical not only to diagnosing, but to combating rampant discrimination.¹² As a step in this direction, we set out to quantitatively model these biases at scale by collecting more than a million ratings of 1,000 synthetic faces from over 4,150 participants and using modern machine learning methods. The resulting paper was recently published in the Proceedings of the National Academy of Sciences (PNAS).

In the excitement of completing a long and intensive project that we believe has the potential to advance our understanding of face perception, we tweeted about the paper. This generated a substantial negative response, which centered around themes of race and the ethical use of AI. We have taken this opportunity to reflect on these issues and on how we communicate about our work. Here, we want to acknowledge and engage with some of the concerns that were raised. We find value in these critical inquiries around ethical applications of face-processing technologies and encourage our various fields to have conversations about them together. We also take concerns about social justice and the responsible use of data seriously.

These are complex issues, and we acknowledge that the way we communicated about them on Twitter included errors of judgment. This longer document is intended to provide some of the context behind our work and explain our reasoning about the ethical issues involved. We begin by telling the story behind the paper and how it was motivated by trying to develop better tools for quantitatively measuring people’s biases and reducing the use of photographs of real people in psychological research. We then discuss research in psychology on first impressions and its relation to the reprehensible practice of physiognomy. Next we discuss the Twitter thread and its impact. We continue with a discussion of AI ethics, where we identify the motivating use case for the technology, its possible misuses, the risks and benefits associated with it, and the specific risk-mitigation steps that we took. In particular, we describe how we limited information released about the model and leveraged the “ethical licensing” approach to protecting against dual-use harms in research contexts. We end with a reflection and some concluding remarks.

The story behind the paper

This paper came about as the result of a friendship between Stefan and Josh. Stefan was born and raised in Trinidad & Tobago and came to the US to attend college. He grew up mixed race in Trinidad, which is not unusual there — around 23% of the population is mixed to some degree.¹³ ¹⁴ However, on coming to the U.S., he found that people had an especially hard time “placing” his race, and, moreover, that they felt the need to do so. As a consequence of this observation, part of Stefan’s research focuses on understanding the perception of race — a focus that brought him to Alex’s lab. However, he faced difficulties when studying these problems in the laboratory. In order to study face perception, one needs pictures of faces — lots of them. Control is important, too: pictures of faces often differ in many ways, and it can be difficult to isolate the effects you want to study. Diversity is another big concern, as (for example) white faces are over-represented both in available stimulus sets and in published research at large, thus reinforcing historical disparities and hierarchies in science.¹⁵ ¹⁶ And when doing research with images of real people’s faces, there are necessarily restrictions on their use, in part out of respect for those people’s privacy and dignity.

Over the years, Stefan spent countless hours using almost every method available to curate and transform the face images needed to answer his research questions. This process was difficult and time-consuming, and the methods that existed at the time had various trade-offs between realism, stimulus control, diversity, and privacy. Frustrated by these issues, he thought of creating a tool that would let researchers like him create highly realistic, well-controlled, racially diverse, and yet privacy-respecting face stimuli. He read about the advances being made in face generation technology using deep neural networks, and discovered the ideal collaborator in Tom’s lab next door. Josh had been working on methods for comparing deep neural nets to human judgments, and had recently been working on a system for generating realistic face images with Jordan. So we all got to work, and eventually created exactly the tool that Stefan had envisioned. By making it possible to study biases in face perception at large scale with realistic and diverse stimuli, we hoped to be able to begin to understand and chip away at some of the barriers between people that those biases create.

Physiognomy and first impressions

Some of the responses to our tweets linked our work to the discredited and reprehensible practice of physiognomy: inferring character traits from physical measurements of the face and skull, which historically developed from racist ideologies. We were forthright in our paper that we do not believe such associations exist, stating that “these attribute inferences, especially those of the more subjective or socially constructed attributes, have no necessary correspondence to the actual identities, attitudes, or competencies of people whom the images resemble or depict”. Our model does not, cannot, and should not be used to infer character traits from physical measurements of the face and skull, and this was never the goal of our work. Rather, we quantitatively model the superficial judgments that people make about others based on their appearance. These judgments have substantial negative societal impact, and by understanding where they come from, we can begin to work to reduce that impact. The automaticity and insidiousness of these superficial judgments is one of the factors that has made it so difficult to eliminate the ideas behind physiognomy. In fact, even people who explicitly reject physiognomic beliefs tend to be influenced by superficial judgments based on appearance.³ This is why models of first impressions are so important in psychology.¹²

The study of face perception is an active area of research in psychology, and some of us have dedicated our careers to understanding how people form potentially biased first impressions. (For more details on the history of physiognomy and its difference from modern research on face perception, see Alex’s book¹ or this recent essay.¹⁷) Despite having affiliations with business schools and computer science departments, all five of us are psychologists concerned with revealing the mental representations that bias our judgments, decisions, and behaviors.

Many people who saw our tweets were concerned about the biases demonstrated in our data and model. To be clear, unlike in other applications of machine learning, where biases are an unwanted artifact, these biases were exactly what we set out to study. We focused on measuring the biases of a population that is often recruited (potentially problematically, we recognize¹⁸ ¹⁹) in behavioral studies: paid crowd-workers on Amazon Mechanical Turk. This was intended to make our model useful to psychologists and other social scientists, but our methods can also be used to explore how these psychological judgments differ across more diverse populations. The biases visible in our results do not merely recapitulate undesirable biases in American society; they help elucidate those biases, and we hope this work can contribute to reducing them. We do not endorse those biases, and we do not want them to be part of any automated system that makes decisions about people’s lives; we have taken steps to prevent just that (for further discussion, see the section below on mitigations of harms associated with dual-use technologies). By understanding the biases in people’s minds and where those biases come from, we can work toward decreasing their influence on both human judgments and AI systems.

The Twitter thread

Most of the conversations on Twitter arose from responses to the first tweet in a thread that included a brief description of the content of the paper and a GIF. Creating and including that GIF in the thread to promote our work was an error in judgment. The GIF showed an animation of four changing faces with words superimposed. The animation was meant to demonstrate how changing variables in our model would alter the images that were generated along certain attribute impressions. The words showed which attribute impressions were being manipulated. However, it is now clear to us that this interpretation of the GIF was not apparent in the context in which it appeared. As a result, what many readers perceived was something else entirely: merely four images with words on them, where the pairing of the images and words included a white man with the word “trustworthy” and a Black man with the word “dominant,” as though we were labeling those faces with those traits. We had tried to use a set of images that reflected the diversity of the faces in our dataset, but ended up with pairings of faces and words that recapitulated stereotypes. This outcome was contrary to our beliefs and the goals of our work.

The Twitter thread also included a tweet where we offered to transform images that people posted in retweets. Our intent in offering this demonstration was to enable people to engage with stereotypes on a more personal level by seeing their own faces transformed, something that we had done ourselves and regarded as a worthwhile source of reflection about how we are perceived by others, and perceive others in turn. The intended audience for our tweet was our own research community, where people know and trust us. We recognize that such an offer from a stranger or someone who has not established trust could be uncomfortable and feel inappropriate. Some people raised concerns about data privacy related to this offer. In the end, none of these images were downloaded, transformed, saved, or used by us in any other way.

AI ethics, use, and misuse

Another set of concerns was about the unethical use of AI, and in particular how technology connecting images of faces to psychological judgments could be abused. These are important concerns and we welcome a critical discussion of these issues as they relate to our work.

There have been a number of previous applications of AI or machine learning methods to faces that we agree are unambiguously unethical, particularly attempts to classify people’s actual (or what is incorrectly assumed to be actual) physical or psychological attributes based on their facial features. For an in-depth treatment of this modern-day form of physiognomy, see this essay co-authored by Alex.¹⁷

This is qualitatively different from our work, which is focused on measuring people’s subjective responses to faces. As a consequence, our work has a clear use case whose benefits, we believe, offset some of its risks.

Identifying the motivating use case

This use case focuses on providing insight into the nature and origins of biases in face perception, which has the potential to help to reduce prejudice. An experiment on face perception of the kind that Stefan, Alex, and hundreds of other social psychologists run typically involves presenting people with images of faces and asking them to provide ratings or perform another related task. For example, if you wanted to know how a perceived property of faces influenced hiring decisions, you could run an experiment where participants act as hiring managers by seeing profiles that include photographs of fictional job applicants and evaluating their qualifications for the job. Analogous experiments manipulating the linguistic content of resumes are common in the social sciences and are a critical tool in studying and combating prejudice.²⁰ ²¹ ²² ²³ However, one serious bottleneck in conducting this research is the availability of appropriately calibrated images of faces (think of Stefan trying to find 200 images of people who vary in ethnicity but are perceived as being equally trustworthy).

Existing methods in social psychology are not sufficient to enable us to fully understand the nature and origins of prejudice resulting from face perception. Prior to the publication of our paper, much of what social psychologists knew about face perception came from using two broad classes of methods to generate the stimuli used in behavioral experiments.

One type of method uses images of actual people, often laboriously calibrated by hand. This is not just effortful, but comes with the ethical challenges associated with linking fictitious stories to real people’s faces and restrictions on the ways in which images can be used.

Another type of method uses software to generate synthetic faces. A productive software-based approach has been the use of 3D models of faces that were previously calibrated to specific psychological judgments, based on work by Alex in 2008.⁴ The faces generated by these models — which look like outdated video-game characters — have proven useful to the psychological community at large, and more than 4,400 users from 65 countries and more than 900 universities or research institutions have downloaded the resulting image databases. These stimuli have been used to study the roots and consequences of gender bias²⁴ and other deleterious stereotypes relating facial appearance to stable personality traits.³ ²⁵ For a general review of this literature, see Alex’s book.¹ However, they have major limitations. First, as should be obvious from Figure 1 below, the resulting faces are not especially realistic. Second, the faces are not diverse because the underlying 3D face model was based almost entirely on laser scans of white people’s faces²⁶. As a result, despite their important role in the study of social perception, these faces are of limited use in studies that try to realistically depict (and inclusively represent) racial diversity, limiting the kinds of questions that researchers can ask and answer. For any stimulus set used in psychology, neuroscience, economics, sociology, or political science, these are major weaknesses, especially in the context of addressing questions (and solving problems) related to racial bias.

Figure 1. Examples of faces produced by a popular technology for generating faces for psychological research. The perceived intensity of faces along the dimension of perceived trustworthiness increases from left to right in units of standard deviation (SD) from the mean.

Our work makes it possible to generate a large number of more realistic and more diverse images of faces calibrated to specific psychological judgments, potentially resulting in a big leap forward for studying face perception in general and racial bias in particular; this was the reason for the excitement expressed about the work in the Twitter thread. One result is a system that can map images of faces to predictions about how people will judge those faces. This system can also be used to modify images along a spectrum of values corresponding to those judgments. This is important for the kind of stimuli that are used in psychological experiments, where being able to continuously vary one feature while keeping others constant is a key part of experimental design.
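A minimal sketch of this idea, assuming a linear relationship between latent codes and judgments (a common simplification for StyleGAN-style models; the variable names and stand-in ratings below are hypothetical illustrations, not the published model):

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512  # dimensionality of StyleGAN's latent space

# Hypothetical data: latent codes of generated faces paired with the
# average rating each face received on some judgment (e.g., "trustworthy").
latents = rng.normal(size=(1000, latent_dim))
true_direction = rng.normal(size=latent_dim)
ratings = latents @ true_direction + rng.normal(size=1000)  # noisy stand-ins

# Least-squares fit: the coefficient vector is a direction in latent
# space along which the predicted judgment changes fastest.
direction, *_ = np.linalg.lstsq(latents, ratings - ratings.mean(), rcond=None)
direction /= np.linalg.norm(direction)

def shift_judgment(z, step):
    """Move a latent code `step` units along the fitted direction.
    Feeding the shifted code to the generator would yield a face
    predicted to score higher (or lower) on the judgment."""
    return z + step * direction

z = rng.normal(size=latent_dim)
z_up = shift_judgment(z, 2.0)     # nudge toward a higher predicted rating
z_down = shift_judgment(z, -2.0)  # and toward a lower one
```

Feeding the shifted codes to a generator would produce a family of faces that vary along the judgment while holding other features roughly constant, which is exactly the kind of controlled variation that psychological experiments need.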

Identifying misuses of the technology

Such a system nevertheless carries significant risks. One is that it could be used to modify images with the intent of deceiving people. Another is that, despite our intentions, it could be misapplied as a tool to automate physiognomy.

Assessing risks and benefits

When thinking about the ethical implications of this work, we used a framework in which we weighed the relative risks and benefits and then attempted to mitigate the risks as best we could. Because existing technologies already have related capacities, we focused on the incremental risks and benefits relative to those systems. To evaluate these incremental risks and benefits, it is important to understand the context in which this work was done, and in particular what kinds of data and models already existed in the psychological literature.

Psychologists have been studying impressions of faces for decades and have collected such data at scale many times in the past. For example, one dataset collected impressions of 20 different perceived attributes in 2,222 faces in order to better understand what makes some faces easier to remember than others.²⁷ Another group used that same dataset to create a deep learning model to predict the superficial judgments people make about faces.²⁸ And a recent paper by the Psychological Science Accelerator collected over 3.2 million ratings by 11,570 participants across 41 countries to understand variability in first impressions across world regions.⁶ ⁷ These publicly available datasets and models predate our own and are similar to those created in our work. Our key innovation was to make it possible to generate realistic faces that will elicit specific subjective judgments, not just predict judgments from existing face images. As noted above, this is a critical step for being able to use these faces as controlled stimuli in behavioral research.
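The distinction between predicting judgments and generating faces that elicit them can be made concrete with a toy model. In the sketch below, hypothetical feature vectors stand in for face images and ridge regression stands in for the deep models cited above; it shows only the prediction side, mapping a face's features to an expected average rating, and cannot by itself produce a face that elicits a chosen rating.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each face is summarized by a feature vector
# (e.g., an embedding from a pretrained network), and the many
# crowd-sourced ratings for a face are averaged into one score.
n_faces, n_features = 500, 128
features = rng.normal(size=(n_faces, n_features))
weights_true = rng.normal(size=n_features)
mean_ratings = features @ weights_true + rng.normal(size=n_faces)

# Ridge regression: a linear stand-in for the deep models cited above.
lam = 1.0  # regularization strength
A = features.T @ features + lam * np.eye(n_features)
w = np.linalg.solve(A, features.T @ mean_ratings)

def predict_rating(x):
    """Predicted average first-impression score for a new face's features."""
    return float(x @ w)

# In-sample fit between predicted and observed mean ratings.
r = np.corrcoef(features @ w, mean_ratings)[0, 1]
```

Going the other way, from a target rating back to a realistic image, requires coupling a predictor like this to a generative model, which is the step the paper contributes.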

It is hard to overstate just how far synthetic face generation technology has come in the last few years. Since the 2018 publication of StyleGAN (the basis for our model), which was trained on over 70,000 real face photographs²⁹, it has been possible to rapidly generate a nearly infinite number of face images that are indistinguishable from real face photographs at first glance. Models such as StyleGAN have been distributed widely through GitHub, and several publicly available face generation tools use related methods.

This kind of synthetic face generation technology predates our own work, and has several recognized risks. Some of these risks are not merely academic; for example, synthetic face images are being used to bolster the realism of bots and sock puppet accounts spreading misinformation on social media platforms.³⁰ This is a problem we feel especially passionate about: as a graduate student, Stefan was part of a team that won a prize for Best Hack to Counter Fake News at a hackathon. Luckily, researchers have been inventing ways to counteract this problem, and there now exist accurate methods to detect when a face has been generated by common face-generation systems.³¹ ³² For faces generated with StyleGAN, you can even download a browser extension for Google Chrome that can accurately detect fake faces that you encounter while browsing the web.

We view our work as creating the first opportunity to study face perception at large scale using naturalistic images, with the added benefit that those images are not associated with real people. It will not be possible to truly understand human face perception without using naturalistic stimuli, and the most ethical way to produce such stimuli without resorting to images of real people is by using automated machine learning systems like StyleGAN. We think that understanding human face perception is an important goal, particularly given the positive effect it could have on addressing the nature and origin of prejudice based on first impressions. To us, this is the primary potential benefit of our work and we consider it substantial. Given that some of us have dedicated our careers to answering these questions, we acknowledge that we likely feel more strongly about them than the average person.

Mitigation steps and ethical licensing

Next we discuss the steps we took to mitigate the risks identified above:

With respect to mitigating the risk of using the tool for image manipulation, we note that because the image-generation component of our system is based on StyleGAN, it is straightforward to detect the modified images it generates using widely-available detection software. No further innovations are needed to identify images manipulated using our model.

With respect to mitigating risk associated with misapplication of the technology, our primary mitigation techniques were to limit the amount of information we released about the technology and to obtain a patent on it:

One approach to mitigation that has been pursued in previous AI research is to limit the information released about a system. In our case, this is in tension with the ideals of replicable and reproducible research that are furthered by the open-science practices that have been widely advocated for and adopted across the psychological sciences.³³ Additionally, the data that we collected on people’s impressions of artificially generated faces represent a substantial resource for future research on face perception. Though we were on the fence about whether to release these data, our reviewers at PNAS asked that we do so. Our dataset has similar content to existing datasets in the field, differing primarily in the number of ratings for each face and the fact that the faces are photorealistic yet synthetic. Given that the synthetic faces produced by our approach are easily detectable using existing software, we believed that releasing the images we used in data collection would represent minimal incremental risk relative to the many synthetic face images currently circulating online. However, we did not feel comfortable releasing the model used to generate and transform faces without a solid framework in place allowing us to limit its use only to researchers. Consequently, contrary to norms in the psychological sciences, we did not release the code for our model or the feature representations corresponding to these faces, which could be used to reconstruct the model. Having said this, building models similar to ours for face generation and manipulation has been possible for skilled machine learning researchers, using publicly available data and techniques, for several years now.

Our other mitigation effort was to patent the technology before the paper was released. The nature of a patent is to identify the possible uses of a technology,³⁴ and our patent indeed identifies the many ways in which this technology could be used. The fact that our institutions sought a patent has been interpreted by our critics as indicating intent to commercialize malicious uses of the technology. In fact, the opposite is true. The patent system is one of the few mechanisms available for restricting the commercial use of machine-learning technologies, and it is the only one available to us as individuals. By patenting the technology, and anticipating many possible uses of it, including problematic ones, we have placed full control over its use in the hands of two non-profit organizations with missions in the public interest (Princeton University and Stevens Institute of Technology). Though some will find little solace in this, given the history of U.S. universities profiting from oppression,³⁵ it is worth considering that in practice the most likely alternative stewards of the technology would have been big tech companies, which are driven by a profit motive, or governments such as that of the United States, which to date has done little to regulate the use of face-processing technologies.³⁶ ³⁷ For some evidence that this mitigation strategy has produced the intended outcome, we note that, thus far, the universities have licensed the technology only to a museum with a free exhibit on the psychology of first impressions, whose takeaway message is that these first impressions should not be trusted.

For a detailed discussion of the advantages and disadvantages of this “ethical licensing” approach to protecting against dual-use harms associated with research, see ref. ³⁸; for a broader analysis of how universities can best leverage socially responsible licensing practices, see ref. ³⁹.

Reflecting and looking forward

Reading responses to our work from a wide range of people has been eye-opening and a valuable experience that we are grateful for. We are not experts on AI ethics, but we do care about working responsibly at the intersection of AI and psychology. We are open to ideas about how to better mitigate the risks of such work as part of a broader conversation about how behavioral scientists can responsibly benefit from advances in AI. We see computational methods as being a key part of the future of the behavioral sciences and would love to chart a path toward this future together.

— Alex, Jordan, Josh, Stefan & Tom


1. Todorov, A. (2017). Face value: The irresistible influence of first impressions. Princeton University Press.

2. Jaeger, B., Evans, A. M., Stel, M., & van Beest, I. (2019). Lay beliefs in physiognomy explain overreliance on facial impressions [Preprint]. PsyArXiv.

3. Jaeger, B., Todorov, A. T., Evans, A. M., & van Beest, I. (2020). Can we reduce facial biases? Persistent effects of facial trustworthiness on sentencing decisions. Journal of Experimental Social Psychology, 90, 104004.

4. Oosterhof, N. N., & Todorov, A. (2008). The functional basis of face evaluation. Proceedings of the National Academy of Sciences, 105(32), 11087–11092.

5. Todorov, A., Dotsch, R., Porter, J. M., Oosterhof, N. N., & Falvello, V. B. (2013). Validation of data-driven computational models of social perception of faces. Emotion, 13(4), 724–738.

6. Jones, B. C., DeBruine, L. M., Flake, J. K., Liuzza, M. T., Antfolk, J., Arinze, N. C., Ndukaihe, I. L. G., Bloxsom, N. G., Lewis, S. C., Foroni, F., Willis, M. L., Cubillas, C. P., Vadillo, M. A., Turiegano, E., Gilead, M., Simchon, A., Saribay, S. A., Owsley, N. C., Jang, C., … Coles, N. A. (2021). To which world regions does the valence-dominance model of social perception apply? Nature Human Behaviour, 5(1), 159–169.

7. Todorov, A., & Oh, D. (2021). Chapter Four — The structure and perceptual basis of social judgments from faces. In B. Gawronski (Ed.), Advances in Experimental Social Psychology (Vol. 63, pp. 189–245). Academic Press.

8. Jaeger, B., Oud, B., Williams, T., Krumhuber, E. G., Fehr, E., & Engelmann, J. B. (in press). Can people detect the trustworthiness of strangers based on their facial appearance? Evolution and Human Behavior.

9. Olivola, C. Y., & Todorov, A. (2010). Fooled by first impressions? Reexamining the diagnostic value of appearance-based inferences. Journal of Experimental Social Psychology, 46(2), 315–324.

10. Todorov, A., Funk, F., & Olivola, C. Y. (2015). Response to Bonnefon et al.: Limited ‘kernels of truth’ in facial inferences. Trends in Cognitive Sciences, 19(8), 422–423.

11. Todorov, A., Olivola, C. Y., Dotsch, R., & Mende-Siedlecki, P. (2015). Social attributions from faces: Determinants, consequences, accuracy, and functional significance. Annual Review of Psychology, 66(1), 519–545.

12. Sutherland, C. A. M., Rhodes, G., & Young, A. W. (2017). Facial image manipulation: A tool for investigating social perception. Social Psychological and Personality Science, 8(5), 538–551.

13. Central Statistical Office. (2012). Trinidad and Tobago 2011 Population and Housing Census Demographic Report.

14. Williams, E. E. (1964). History of the people of Trinidad and Tobago. Praeger.

15. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91).

16. Cook, R., & Over, H. (2021). Why is the literature on first impressions so focused on White faces? Royal Society Open Science, 8(9), 211146.

17. Agüera y Arcas, B., Mitchell, M., & Todorov, A. (2017). Physiognomy’s new clothes.

18. Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.

19. Gray, M. L., & Suri, S. (2019). Ghost work: How to stop silicon valley from building a new global underclass. Houghton Mifflin Harcourt.

20. Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review, 94(4), 991–1013.

21. Darolia, R., Koedel, C., Martorell, P., Wilson, K., & Perez-Arce, F. (2016). Race and gender effects on employer interest in job applicants: New evidence from a resume field experiment. Applied Economics Letters, 23(12), 853–856.

22. McConahay, J. B. (1983). Modern racism and modern discrimination: The effects of race, racial attitudes, and context on simulated hiring decisions. Personality and Social Psychology Bulletin, 9(4), 551–558.

23. Milkman, K. L., Akinola, M., & Chugh, D. (2015). What happens before? A field experiment exploring how pay and representation differentially shape bias on the pathway into organizations. Journal of Applied Psychology, 100(6), 1678–1712.

24. Oh, D., Buck, E. A., & Todorov, A. (2019). Revealing hidden gender biases in competence impressions of faces. Psychological Science, 30(1), 65–79.

25. Cogsdill, E. J., & Banaji, M. R. (2015). Face-trait inferences show robust child–adult agreement: Evidence from three types of faces. Journal of Experimental Social Psychology, 60, 150–156.

26. Blanz, V., & Vetter, T. (1999). A morphable model for the synthesis of 3D faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques — SIGGRAPH ’99 (pp. 187–194).

27. Bainbridge, W. A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face photographs. Journal of Experimental Psychology: General, 142(4), 1323–1334.

28. Song, A., Li, L., Atalla, C., & Cottrell, G. (2017). Learning to see people like people: Predicting social impressions of faces. In Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 1094–1101).

29. Karras, T., Laine, S., & Aila, T. (2018). A style-based generator architecture for generative adversarial networks. arXiv:1812.04948 [cs, stat].

30. Giansiracusa, N. (2021). How algorithms create and prevent fake news: Exploring the impacts of social media, deepfakes, GPT-3, and more. Apress.

31. Guarnera, L., Giudice, O., & Battiato, S. (2020). DeepFake detection by analyzing convolutional traces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 2841–2850).

32. Wang, S.-Y., Wang, O., Zhang, R., Owens, A., & Efros, A. A. (2020). CNN-generated images are surprisingly easy to spot… for now. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 8692–8701).

33. Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S., Breckler, S., Buck, S., Chambers, C., Chin, G., Christensen, G., & others. (2016). Transparency and openness promotion (TOP) guidelines.

34. General information concerning patents. (n.d.). Retrieved April 30, 2022.

35. Wilder, C. S. (2013). Ebony and ivy: Race, slavery, and the troubled history of America’s universities. Bloomsbury Publishing USA.

36. Learned-Miller, E., Buolamwini, J., Ordóñez, V., & Morgenstern, J. (2020). Facial recognition technologies in the WILD.

37. Barrett, L. (2020). Ban facial recognition technologies for children and for everyone else. BUJ Sci. & Tech. L., 26, 223–285.

38. Guerrini, C. J., Curnutte, M. A., Sherkow, J. S., & Scott, C. T. (2017). The rise of the ethical license. Nature Biotechnology, 35(1), 22–24.

39. Mimura, C. (2010). Nuanced management of IP rights: Shaping industry-university relationships to promote social impact. In R. Dreyfuss, H. First, & D. Zimmerman (Eds.), Working within the boundaries of intellectual property. Oxford University Press.