Academia Should Learn from Software Development

To become a truly collective endeavor, empirical research should become more like software development.

The nature of empirical research varies a lot between academic disciplines. Scholarly specialization accounts for much of the tremendous advance of modern science, but it also hinders collaboration, as researchers find it difficult to make their different ways of studying the world interoperate. To make empirical research a truly collective endeavor across disciplines, academia should establish arrangements and models similar to those of software development.

One might argue that modern science is too complex and specialized for there to be any hope of empirical research processes escaping their disciplinary confinement. Yet we are perfectly capable of dealing with highly complex objects of research. Breakthroughs in management and engineering disciplines have made it possible to produce incredibly complex products that would have sounded like science fiction only a few decades ago. Complexity itself is not the problem; the problem is how we as researchers (fail to) manage it in empirical research.

The way empirical research arranges itself in many fields ignores the most important principle in the design of complex systems: modularity (Baldwin and Clark, 1997, 2000; Schilling, 2000).

The contrast with software development, undoubtedly one of the most advanced industries in managing the production of complex and constantly evolving products, is stark. Empirical research has nothing like APIs, package managers, version control or source code management systems. Software development and empirical research are not the same thing, but they have many similarities; most importantly, both deal mostly with text and numerical data. Scientific research is currently modular only at the level of final, compiled products: we build on and contribute to each other's work through the system of scholarly publishing, but below the level of published papers idiosyncratic practices take over.

A key lesson from software development is that it is possible to produce highly sophisticated outputs by bringing together a vast number of components produced by many uncoordinated contributors. The emphasis here is on 'uncoordinated': no sensible researcher will subordinate his or her research to a central authority. Software development achieves this through highly developed tool environments and associated practices, which together constitute the basic infrastructure of any serious software development effort. Would such a modularizing infrastructure be possible in research?

Empirical research means the production of a posteriori knowledge, that is, justifying knowledge claims by reference to observation. A necessary part of any empirical study is a process that starts by acquiring, simulating or experimentally generating data about a phenomenon of interest and then proceeds by performing analytical operations on the data. Regardless of the methods and subject matter of a study, the process could be conceived as a chain of research operations with specific inputs and outputs. Currently, we lack generic tools to describe, model and manage these practical steps as distinct, well-defined research operations.
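To make the idea concrete, here is a minimal sketch in Python of what "a chain of research operations with specific inputs and outputs" could look like. Everything here is hypothetical illustration, not an existing tool: the `ResearchOperation` class and `execute_chain` function are invented names, and the toy steps stand in for real data acquisition and cleaning.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

# Hypothetical sketch: a research operation as a named step that
# declares what artefacts it consumes and produces, so a whole study
# can be composed and inspected without knowing the internals of
# each step.
@dataclass
class ResearchOperation:
    name: str
    inputs: List[str]    # names of artefacts this step consumes
    outputs: List[str]   # names of artefacts this step produces
    run: Callable[[Dict[str, Any]], Dict[str, Any]]

def execute_chain(ops: List[ResearchOperation],
                  artefacts: Dict[str, Any]) -> Dict[str, Any]:
    """Run operations in order, checking declared inputs exist first."""
    for op in ops:
        missing = [i for i in op.inputs if i not in artefacts]
        if missing:
            raise ValueError(f"{op.name}: missing inputs {missing}")
        artefacts.update(op.run({k: artefacts[k] for k in op.inputs}))
    return artefacts

# Toy example: acquire raw observations, then clean them.
acquire = ResearchOperation(
    "acquire", inputs=[], outputs=["raw"],
    run=lambda _: {"raw": [3, None, 5]})
clean = ResearchOperation(
    "clean", inputs=["raw"], outputs=["tidy"],
    run=lambda a: {"tidy": [x for x in a["raw"] if x is not None]})

result = execute_chain([acquire, clean], {})
print(result["tidy"])  # [3, 5]
```

The point of the sketch is the interface, not the implementation: once steps declare their inputs and outputs, tooling can validate, cache, swap and reuse them, exactly the kind of modularity the text argues for.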


Breaking down empirical research into modular components and their relationships would make it possible to grasp research processes even if you do not understand everything that happens inside each research operation. The overall process could, for instance, be modelled as a graph, in a similar fashion to software versioning. This would help researchers think more clearly about their practices and develop information systems that offload deadly boring administrative work to digital research infrastructures. The latter is particularly important, since another lesson from software development is that doing things right should also make things easier.
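The graph view can be sketched with Python's standard library alone. The study structure below is an invented example, but the mechanism is real: `graphlib.TopologicalSorter` (Python 3.9+) takes each operation together with the operations it depends on and yields a valid execution order, just as build tools and package managers do for software components.

```python
from graphlib import TopologicalSorter

# Hypothetical study modelled as a directed graph: each operation
# maps to the set of operations it depends on.
process = {
    "acquire_data": set(),
    "clean_data":   {"acquire_data"},
    "run_model":    {"clean_data"},
    "robustness":   {"clean_data"},
    "write_up":     {"run_model", "robustness"},
}

# A topological order is any sequence in which every operation comes
# after all of its dependencies.
order = list(TopologicalSorter(process).static_order())
print(order)
```

Once a study is represented this way, the benefits discussed above follow almost for free: any reviewer or collaborator can see which steps feed which, rerun a single node, or splice a well-executed operation into a different study.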

It took half a century for software development infrastructure to reach a level at which we can literally stand on the shoulders of giants and glue powerful software together from components developed by others. All that infrastructure, and the mental models that go with it, already exist in software development, ready to be ported to academic research, from which much of it emerged anyway.

Finally, some of my colleagues may find such a vision of modular empirical research philosophically, methodologically or culturally objectionable. I agree that there are risks in any attempt to de facto standardise aspects of research, even the most mundane ones. At the same time, the massive opportunities associated with a new way of doing empirical research probably justify facing those risks.

A modular empirical research process would be much easier for others to inspect and replicate, which could lead to better quality control and less duplicated research. A single, well-executed research operation, say dataset extraction and cleansing, could naturally become part of multiple studies in different fields; reviewers could run their own robustness checks in addition to those provided by the authors of a study; and it would be possible to assign much more granular credit for academic work. Overall, modular systems tend to evolve faster than non-modular systems of comparable size (Simon, 2002).

This is a slightly extended version of the talk I gave at the Alan Turing Institute Symposium on Reproducibility for Data-Intensive Research, 6 April 2016, at the Dickson Poon China Centre, St Hugh’s College, Oxford.

References

Baldwin, C. Y. & Clark, K. B., 1997. Managing in an age of modularity. Harvard Business Review, 75(5), pp. 84–93.

Baldwin, C. Y. & Clark, K. B., 2000. Design rules: the power of modularity. Cambridge, MA: The MIT Press.

Schilling, M. A., 2000. Toward a general modular systems theory and its application to interfirm product modularity. Academy of Management Review, 25(2), pp. 312–334.

Simon, H. A., 2002. Near decomposability and the speed of evolution. Industrial and Corporate Change, 11(3), pp. 587–599.