Reductionism and Software Engineering: Understanding the gap between the users and designers of interactive systems

Reductionism can be viewed as an essential theoretical component of modern natural sciences such as biology or genetics. Although reductionist approaches work very well as long as “natural” phenomena are studied, things become more difficult when human beings are the object of study. Let us first see how this notion of reductionism might be briefly defined, and then try to apply that definition to software engineering.

Reductionism can be understood as the process through which the observed behaviour of a complex system is defined and explained as the product of the operation (or interaction) of its identified components. In neurobiology, for instance, a good example is the way the notion of “addiction” is tackled from a neurobiological/behaviouristic point of view. A particular behaviour or phenomenon, such as the act of taking drugs, is observed and defined, and explanations for this phenomenon are sought at a psychological (cognitive, behaviouristic) level, and further down at a neurological and a molecular level. Technological advances such as fMRI or PET scanning now allow us to observe changes in the brain as they happen during the display of a particular behaviour, enabling researchers to establish causal relationships between a particular observed behaviour and what happens in the brain of the person being observed.

However, the problem with such an approach is that things get considerably more complicated as soon as a human observes the behaviour of another human. Poststructuralist theory, for example, emphasises the multiplicity and the historical, cultural and social contingency of human perspectives: as the way we make sense of the world is shaped by culture and by our own set of values, the meaning of notions such as “addiction” becomes slippery.

So let us try to apply all this to software engineering. What happens if we now take interactive systems as our object of study? Briefly, an interactive system can be described as a combination of software and hardware, both of which can perform a number of basic operations. These operations are combined in a certain way so as to provide particular functionalities to the system’s users. Can these functionalities be described purely as the particular combination of these basic operations? As I have tried to argue in previous posts, it can be useful to consider the meaning of the behaviour of an interactive system as emerging in the framework of its interaction with its users. It is therefore not fixed in the particular combination of the system’s ‘logical’ elements, but is dynamically created through the actual use of the system by its users. This is where a rift opens between the meaning of the system for its designers, who focus on the combination of the logical elements composing the system, and its meaning for the user, which is dependent on his or her own system of representations. Can reductionism therefore provide a conceptual tool for better understanding the gap that exists between system designers and their users?
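The reductionist description of a functionality can be sketched in a few lines of code. The example below is purely illustrative (the operations and names are hypothetical, not taken from any real system): a “save” functionality is defined as nothing more than the combination of two basic operations, and the description is complete at that level even though it says nothing about what saving means to a user.

```python
# Illustrative sketch only: a "functionality" expressed reductively
# as a combination of basic operations (all names are hypothetical).

def write_bytes(store: dict, key: str, data: bytes) -> None:
    """Basic operation: persist raw data under a key."""
    store[key] = data

def update_index(index: set, key: str) -> None:
    """Basic operation: record that the key now exists."""
    index.add(key)

def save_document(store: dict, index: set, name: str, text: str) -> None:
    """The designer's view: 'save' IS the combination of basic operations."""
    write_bytes(store, name, text.encode("utf-8"))
    update_index(index, name)

store, index = {}, set()
save_document(store, index, "notes.txt", "hello")

# The reductionist account is complete at this level...
assert store["notes.txt"] == b"hello" and "notes.txt" in index
# ...yet it captures nothing of what "saving" means to the user
# (safety, memory, finished work), which emerges only in use.
```

The point of the sketch is precisely what it leaves out: the code fully specifies the designer’s combination of logical elements, while the user’s meaning of the operation lies outside any such description.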