
Measuring Cultures of Responsibility in the Life Sciences

It’s time to critically assess the ways that we attempt to measure progress towards responsible conduct in the life sciences.

Daniel Greene
Bioeconomy.XYZ

--

The promise and the problem

In early 2020, the National Academies of Sciences, Engineering, and Medicine released a new report on “Safeguarding the Bioeconomy”, defining the bioeconomy as “economic activity that is driven by research and innovation in the life sciences and biotechnology, and that is enabled by technological advances in engineering and in computing and information sciences.” The current US bioeconomy is valued at close to $1 trillion and is expected to continue growing rapidly, making profound contributions in fields like healthcare, agriculture, energy, and industrial production. These are some of the benefits that we can hope for from continued life-science research.

But life-science research is “dual-use”, meaning in this context that the same knowledge and tools that can be used to create massive benefits can also be used to create massive harms. Life scientists need to grapple with safety concerns around lab accidents, and they also need to secure their physical and virtual spaces against theft or misuse.

In addition, the information that scientists are producing might itself constitute a risk. Publicly available life-science research could enable people to create novel bioweapons using increasingly accessible tools and techniques. For example, the entire genome sequence of smallpox has been published online. That knowledge could enable bad actors to reconstruct smallpox, but it could also enable life scientists to develop better vaccines more quickly. On this general topic, about a week after the Bioeconomy report was published, the National Science Advisory Board for Biosecurity (NSABB) hosted its first publicized meeting since 2017, where it discussed some of the tensions between security and public transparency in life-science research.

A culture of responsibility?

So how can the benefits of dual-use research be preserved and the risks minimized? Here’s one common answer, from the NSABB’s “Proposed Framework for the Oversight of Dual-Use Life Sciences Research” in 2007:

“The NSABB strongly believes that one of the best ways to address concerns regarding dual use research is to raise awareness of dual use research issues and strengthen the culture of responsibility within the scientific community. The stakes are high for public health, national security and the vitality of the life sciences research enterprise.”

The idea is, roughly, that life scientists could manage themselves by cultivating an internal set of cultural norms and expectations around how to wisely perform dual-use research. Many other groups in government and academia have reached similar conclusions, as have multiple National Research Council reports. A culture of responsibility in the life sciences is widely seen as important for mitigating the risks and preserving the benefits of biotechnologies. In fact, the US Department of Homeland Security has recently funded the Engineering Biology Research Consortium to conduct culture-of-responsibility training in life-science laboratories nationwide.

Staff members in a BSL-4 laboratory. In order to safeguard the present and future benefits of the bioeconomy, life scientists need to adopt norms and expectations around how to wisely perform dual-use research.

Measuring a culture of responsibility

As a social scientist interested in safeguarding the bioeconomy, here is my core question:

How would we observe a culture of responsibility in practice, or know if we had one?

What are the elements of a culture of responsibility, and what are some meaningful metrics that we could use to indicate the presence of those elements and ultimately to guide the development of programs and interventions?

Over the last several months, I have been reviewing the literature on how programs to create a culture of responsibility are conceptualized and assessed. What I have found so far is that unfortunately, over a decade after the 2007 NSABB report, there’s not much assessment going on. A 2018 review by Perkins et al. looked at 326 articles on the subject and summarized the situation as follows:

“Of the many interventions that might be used to improve the culture of biosafety and biosecurity, educational and training interventions are among the most frequently employed or cited. Unfortunately, there has been little assessment of these interventions specifically directed at improving biosafety and biosecurity in laboratories.”

Most training programs that I have seen have no assessment component at all, and the conceptualizations and assessments that do exist are often lacking in rigor. Let me share a quick example.

An example: Minehata et al. (2010)

A program described in 2010 by Minehata and colleagues sought to cultivate a culture of responsibility among medical students in Japan through a five-day course that has now been integrated into existing medical school syllabi. The program was evaluated simply by asking participants to rate, on a 1-to-5 scale, whether they agreed that their “understanding was developed” on various topics, such as “Life science and ethics”, “Intellectual property”, and the “surrounding situation of scientists and scientific papers”. The average response was (perhaps unsurprisingly) between 4 and 5 on every topic, and almost exactly the same across topics.

Unfortunately, this evaluation was deeply flawed for at least three reasons:

  1. The topics were not nearly specific enough for people to provide nuanced answers. For example, it might be hard to summarize your understanding of the “surrounding situation of scientists and scientific papers” with a single number.
  2. The survey questions only asked whether any understanding was developed or not, but they provided a five-point agreement scale. This simultaneously violates two principles of survey design: it uses an agree-disagree scale, which tends to tilt people towards agreement by default, and it maps a binary question onto a multi-point response field.
  3. Finally and perhaps most importantly, the questions are very likely subject to social desirability bias: the participants may have been trying to please the person doing the assessment, not wanting to criticize the course too harshly. (A toy simulation just after this list illustrates points 2 and 3.)
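
To make the second and third points concrete, here is a minimal, purely hypothetical simulation in Python. None of the numbers come from Minehata and colleagues; the baseline, learning, and “pleasing” parameters are invented solely to illustrate how acquiescence and social desirability pressures alone can push mean scores to between 4 and 5 on every topic, regardless of how much was actually learned.

    # A rough, purely hypothetical simulation of how an agree-disagree item can
    # produce uniformly high mean scores regardless of what respondents learned.
    # Every parameter below is an invented illustration, not data from the study.
    import random
    import statistics

    random.seed(0)

    def simulate_item(true_understanding, n_respondents=100):
        """Simulate 1-5 responses to "my understanding was developed" for one topic.

        true_understanding: 0.0-1.0, how much the course actually taught (assumed).
        Acquiescence and social-desirability pressures push answers toward "agree"
        regardless of that value.
        """
        responses = []
        for _ in range(n_respondents):
            agree_by_default = 3.6 + random.gauss(0, 0.5)  # acquiescence bias
            real_learning = 0.8 * true_understanding       # modest effect of actual learning
            please_assessor = 0.6                          # reluctance to criticize the course
            raw = agree_by_default + real_learning + please_assessor
            responses.append(min(5, max(1, round(raw))))   # clip to the 1-5 scale
        return statistics.mean(responses)

    # Two topics with very different (assumed) real learning outcomes...
    print(simulate_item(true_understanding=0.9))  # a topic the course covered well
    print(simulate_item(true_understanding=0.1))  # a topic it barely touched
    # ...both means tend to land between roughly 4 and 5, and close together.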

Beyond these methodological problems, the deeper issue is that the program assumed that “understanding” is sufficient for a culture of responsibility and for subsequent behavior change. The underlying assumption here might be something like: “If these students intellectually understand that dual-use research is important, then they will act appropriately by spontaneously self-organizing into a culture that wisely self-regulates its research practice.”

Unfortunately, decades of research suggest that intellectual understanding is often not enough to change culture or behavior. Huge bodies of literature describe how elements like leadership, social norms, environmental affordances, and incentives, not just instruction, shape real-world behaviors and decisions.

Defining our goals

Keep in mind as well that this program is exceptional for having an assessment at all. Most “culture of responsibility” programs don’t appear to do any assessment whatsoever. This contributes to conceptual confusion about the goal of these programs: what exactly are educators trying to achieve?

  • Is the goal to “raise awareness”, as Minehata and colleagues described their program? This goal implies a fairly low bar of merely alerting life scientists to the existence of various issues.
  • Is it to provide “training” or impart some definable knowledge or skills? If so, what knowledge or skills, and what evidence is there that these skills are both lacking and important?
  • Or is the goal to change “culture”, “norms”, or “engagement”? These terms are not interchangeable; they imply different, often unstated and overlapping, theories about what will actually cause life scientists to change their behavior in ways that reduce risk.

I think that this is an area where social scientists have a lot of potential value to add to biorisk management, biotechnology research, and the development of the bioeconomy. I encourage anyone interested in the topic to reach out to learn more.

About the author

Daniel Greene is a postdoctoral researcher and fellow at the Center for International Security and Cooperation at Stanford University. He uses a combination of data science, survey research, policy analysis, and qualitative methods to help us understand our collective options for regulating synthetic biology. Daniel has a PhD in Education from Stanford.
