The Unpredictable Art of Science — and a Tentative Manifesto to Foster It

As a species, our ability to observe complex behaviours, to rationalise them into predictable rules and to make decisions based on this evidence for the betterment of society has served us well. Formalizing the process of connecting what we see with what tends to happen as a result allowed the ancient Greeks to develop many of the fundamental principles of science. Civil society quickly recognized the value of the products of these efforts and set aside significant resources, even while most of the population was uneducated and therefore unable to understand the basis of new technologies. Over time, successful civilizations came to share a common appreciation of the benefits of scientific endeavour, and the profession of science became an essential element of modern economies. Over the past two centuries in particular, there has been remarkable progress in health, communication technology and our built environment. Most of these brilliant applications, however, were preceded by equally startling discoveries that, at the time they were made, gave no inkling of the impact they would ultimately have.

As increasing resources came to be invested in scientific research, funders (largely governments) required and developed science policies and processes to monitor the performance of the sector. Large-scale engineering projects such as the Apollo space program and, later, the Human Genome Project could not have been achieved without careful management and bold extrapolation: the technologies and materials required for actually achieving the goal often either did not exist at the time or were yet to be fully developed. The most advanced DNA sequencing machines employed in the latter stages of the genome project (circa 2002) are now quaint museum pieces; their successors can, in an afternoon, acquire the same volume of data as years of intensive effort by thousands of technicians, at a tiny fraction of the cost.

It is also true that the most impactful scientific advances are often those whose value goes unappreciated at the time, that emerge from accidental findings, and that are the work of a few individuals. Much of modern science is not done via coordinated effort. With the exception of high-energy physics and astronomy (think Large Hadron Collider, International Space Station), most advances are made by small teams of scientists. Indeed, major scientific prizes are usually awarded to very small numbers of individuals. The rules of the Nobel Prize in Physiology or Medicine, for example, enforce this restriction, and more than a century after the prize was first awarded, the fact that the rule remains suggests it is serving its purpose. Transformative discoveries are still made largely by individuals, typically working with small groups of trainees, and their insights are often difficult to spot at the time. Rough diamonds pop up from within the constant flow of research data, are assimilated into new thinking, and achieve amplified and recognizable impact.

Following World War II, growth of the public and private scientific sectors accelerated, driven by the increasing wealth afforded by improvements in quality of life. This was a virtuous circle, but science became more expensive and the number of scientists grew until the inevitable point was reached where available funds failed to keep pace. While there were tough patches in the 1980s and 1990s, the past 10 years have seen unprecedented stagnation, and as a result the international scientific enterprise has entered a new phase. The choice of who is supported for what research has long been determined by rigorous peer review, but the slowing of funding has placed huge pressure on this adjudication system. A decade or so ago, for example, one in four or five research proposals was funded by the Canadian Institutes of Health Research (CIHR), a level comparable to the much larger National Institutes of Health in the US. This “success rate” gradually declined as more projects were submitted by an increasing number of scientists, a vicious cycle in which reduced chances of funding drove increased applications. In July, the most recent CIHR competition had the lowest success rate in its history, 13%.

This is not surprising. Governments cannot simply continue to invest in more and more research; there has to be some steady-state level. As mentioned, the mid-1990s saw major cuts in science investment as the government of Canada sought to rein in the deficit. This was followed by significant increases and the formation of new funding entities such as Genome Canada (genomics research), the Canada Research Chairs (salary support) and the Canada Foundation for Innovation (equipment support). Few scientists today anticipate a repeat of that period, even though our dependence on science has never been greater. Rather, they worry about changes in how science is adjudicated and what science is actually done.

Facing the reality of slowly eroding spending power on research, funders are looking to extract value from their investments more efficiently. In Canada, there have been shifts in the balance of research from basic/discovery science to applied research, as illustrated by changes in the mandate of the National Research Council and the earmarking of new funds for strategic programs. Whether these are wise changes is debatable. The optimal balance of discovery versus applied research, or open ideas versus top-down strategic support, is simply unknown. What is clear is that one doesn’t work without the other: stopping discovery investment would soon lead to a dearth of new findings to develop, while it is the application of science that generates the wealth governments re-invest in new discovery science. The positive or negative impact of the Canadian shift towards more applied research at the expense of basic science will not be clear for some time. As an aside, this balance is ideological but not necessarily “right wing”. In the US, there is bipartisan support for government funding of basic science. On the left, it is seen as progressive investment in future quality of life. On the right, basic science with no clear pay-off is seen as too risky for business, and so should be funded by government so that businesses can become involved when applications emerge.

In addition to changes in what type of science is done, there have also been changes in how it is managed. In the face of shrivelling funding rates, agencies have introduced new processes to adjudicate research. Perhaps the most substantial changes (worldwide!) have been rolled out by CIHR. Their ambitious plans received very mixed reactions, in part because the changes were implemented across the board and within ever-tightening funding constraints. The most controversial change was a switch away from scientists meeting to discuss applications to a system of virtual review, in which they submitted their opinions from their office or home. Up to five reviewers were tasked with reviewing each new research proposal, and their scores were tabulated to arrive at an overall rank order of “enthusiasm”. In addition, grant applications became more rigidly defined and reviewers were instructed what to assess. In essence, scientists were forced to think in a particular manner (within a precise character count) and to include only material deemed important by the agency. Information was limited, ostensibly so that reviewers could spend less time evaluating more applications.

Pretty much everything that could go wrong with the new process did, and the result was widespread loss of confidence in the adjudication process, the most important aspect of a science funding agency. Since then, a number of hasty changes have been made which should, over time, improve effectiveness. But scientific review is a fragile and imprecise process. As described above, identifying novel and interesting research ideas is very difficult, and even the best-run systems make errors; adjudicators are human. The most daring or original ideas may be criticized, discarded or seen as too high risk. This becomes ever more likely as acceptance rates drop, and the next competitions run by CIHR are likely to have single-digit success levels. Less obvious is the effect of overzealous constraint of how a research proposal is presented. This promotes cookie-cutter science that fits into preconceived ideas, a certain path to incremental, slow, boring research.

So how might we protect and nurture the best and most promising science and scientists? We must start by recognizing that science is inherently unpredictable, so the usual controls and oversight do not work well and can be counter-productive: the more we try to mould science, the less effective it becomes. We also know that giving scientists too much latitude can reduce their competitiveness, and that science is expensive and resources limited, so funds must be carefully expended. Finally, while no scientist should expect a job guarantee, those who are performing should not be constantly worrying about losing funding; that worry creates inefficient churn and disruption and leads to shorter-term thinking.

So what are the solutions? The following ideas might help support more creative science and, hence, better value from our investment in it:

1. The supply/demand balance of scientists is broken. Training takes too long, career progression is too precarious and incredibly talented people are giving up on science. We should aim to shorten PhD and postdoctoral training periods and provide a framework for career advancement. Science is becoming more technical, yet we under-appreciate research technicians. Scientific thinking is also valuable in many jobs that are not overtly scientific, so increase opportunities for the scientifically trained in non-traditional careers.

2. We need the best scientific brains, yet there is a trend towards less support for young and mid-career researchers, and the gender imbalance still widens with seniority. Adjust competitive funding to maintain equivalent success rates across career stages and genders. It makes no sense to starve younger scientists, and such equity adjustments would go far in stabilizing a scientific career structure.

3. The fragility of funding causes inefficient, staccato patterns of progress. Support a base funding level sufficient for a researcher to maintain activity, as judged by means other than counting papers. This base can be added to by those who compete for additional funds. Researchers assessed as unproductive should be ineligible for base (or other) funding.

4. The federal government has at least 20 funding systems for health-related research alone. Simplify these government funding vehicles by consolidation. Here be savings, efficiencies and increased coordination!

5. Diversity of thought is the wellspring of originality. Are we penalizing novelty and non-conformity in our children? Every child deserves exposure to the wonders of science. Restore the importance of science to the core curriculum.

6. There is no perfect way to conduct science. Support new models and approaches such as the Janelia Farm model of the Howard Hughes Medical Institute. The Perimeter Institute is a Canadian example of a new model for theoretical physics. Why not other areas of science?

7. Metrics can be useful when used longitudinally, but are easily gamed and can drive perverse incentives. Use sparingly and involve scientists at all career stages in assessments and planning. Encourage appropriate risk.

8. Assessment of a scientist’s performance is now done largely by proxy; journal impact factors are lazy substitutes for reading actual papers. Scientific articles are still the primary currency of science. This has worked well, but as science changes, so must its outputs, and new forms of output should be recognized (don’t get me started on the broken science publishing system). There is plenty of room for innovation in this space.

9. Competition for science funding isn’t a level playing field: larger scientific centres have inherent advantages, leading to geographical distortions. Rather than handicap the successful (competition is global), we must ensure support of disadvantaged centres through regional partnerships that sustain capacity and expertise.

This may all sound naïve and entitled. Why shouldn’t scientists be subject to the same rules, regulations and expectations as everyone else? The argument is simply that such restraints can reduce potential value and returns. Indeed, science already faces a very specific limitation, exerted through the amount of resource society is willing to dedicate to it. Before arguing whether the amount of that investment is correct, we need to ensure the resource we have is being optimally allocated for the betterment of society. I would argue that if we continue on the current path of adding ever tighter controls and conformities to research without understanding their effects on the impact and quality of that research, then we will likely be wasting money.