Variety is the spice of life

By Mikaël Akimowicz

January 29, 2017


… and so it goes for research methods in the social sciences. A long-standing debate has divided the social sciences research community between proponents of qualitative approaches and proponents of quantitative approaches. In this post, I would like to highlight why the Manitoba Rural Broadband Project relies on mixing both approaches. In short, using both approaches can mitigate the weaknesses of one approach with the strengths of the other.

Quantitative research relies on the assumption that there exists a unique truth that is to be found out. In this approach, researchers are neutral and objective observers, who report measurements, potentially use them to validate assumptions, and eventually strengthen the plausibility of general laws.

The number of observations and the way measurements are carried out are therefore important features for assessing research results. Assumptions should be falsifiable, i.e., for each assumption, there should exist a measurement that could refute it. Additionally, research protocols should be replicable; since truth is unique, context does not matter: the same causes produce the same results.

Qualitative research, on the other hand, relies on the assumption that context matters: social backgrounds constrain the way each individual interprets information, and therefore researchers cannot be neutral and objective observers. Additionally, human beings are strategic, and social interactions are likely to modify one's behavior.

Answers may be adjusted to what an interviewee thinks a good answer would be, one that could bring benefits. And although qualitative researchers also rely on assumptions and design protocols, they are aware that they draw conclusions from specific contexts. Their goal is to define ideal types that can be used for comparisons.

Where does this divide come from?

During the 18th century, natural scientists, classified as positivists, used induction to validate their assumptions. For instance, to prove that swans are white, one would observe swans: each white swan observed would confirm the assumption, and the more white swans one saw, the more plausible the assumption became. Unless one saw a swan that was not white, one would consider the assumption valid. This is the confirmation approach to validation.


Social scientists began to question this approach. Their main issue concerned consciousness: can self-conscious individuals report social facts as objectively and neutrally as positivists claimed they did? Can social interactions be neutral? Can the interpretation of social behaviors be objective? Social scientists departed from the positivist stance and started to develop their own approach to science.

Aware of the flaw in their hard-line stance on confirmation, positivists modified their position. Indeed, they realized that confirmation cannot validate any assumption: the assumption that swans are white can be validated only if every single swan (past, present, and future) is observed and found to be white, which is in practice not possible.


To overcome this weakness, Karl Popper defined the concept of falsifiability: assumptions have to be stated in a way that allows them to be challenged through experimentation. Since then, positivists have adopted the hypothetico-deductive model, in which theory-driven assumptions are tested empirically using observable data. If the test does not contradict the predictions deduced from an assumption, the assumption is provisionally accepted until it is falsified.

Only relatively recently have the qualitative and quantitative approaches been reconciled by proponents of mixed-methods research, a community of researchers who argue that the weaknesses of one approach can be mitigated by the strengths of the other.

Indeed, quantitative results are usually supported by a large number of observations, which permits researchers to make generalizations. Unfortunately, quantitative results can only shed light on phenomena whose causes can be measured one way or another.

For instance, the assumption that happy workers are more efficient can hardly be tested: although efficiency can be measured directly, happiness can only be captured by a proxy, a variable that approximates workers' happiness.

On the other hand, qualitative data capture interviewees' rationales and personal testimonies, which results in a deeper understanding of complex phenomena. Unfortunately, collecting such testimonies is costly: it requires considerably more resources, such as time and money. Also, the small number of cases analyzed hinders generalization.

Why does it matter for our research?

The Manitoba Rural Broadband Project relies on mixing both approaches. On the one hand, quantitative data is collected to describe digital infrastructure, skills, utilization, and affordability for each community. We use a tool called the E-Index, developed by Function Four Ltd., which assesses a community's digital background for seven technologies: computer, fax, fixed phone, internet, mobile phone, radio, and television.

For each technology, a pool of tasks ranging from basic to complex is assessed, and a grade is then calculated. On the other hand, qualitative data is collected to understand the opportunities, challenges, and risks associated with the use of digital technologies.
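To give a rough sense of how task assessments could roll up into a single community grade, here is a purely hypothetical sketch. It simply averages pass/fail task results per technology and then averages across technologies; the actual E-Index calculation is Function Four Ltd.'s own, and its formula, weights, and data are not described here.

```python
# Illustrative only: NOT the actual E-Index formula, which belongs to
# Function Four Ltd. All names and data below are hypothetical.

TECHNOLOGIES = ["computer", "fax", "fixed phone", "internet",
                "mobile phone", "radio", "television"]

def technology_score(task_results):
    """Average a pool of task outcomes (1 = task accomplished, 0 = not)."""
    return sum(task_results) / len(task_results)

def community_grade(assessments):
    """Average the per-technology scores into one community grade."""
    scores = [technology_score(tasks) for tasks in assessments.values()]
    return sum(scores) / len(scores)

# Hypothetical data: each technology has three task outcomes,
# ordered from basic to complex.
example = {tech: [1, 1, 0] for tech in TECHNOLOGIES}
print(round(community_grade(example), 2))
```

In practice a real index would likely weight complex tasks differently from basic ones and scale the result; the sketch only illustrates the aggregation idea described above.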

For this research, qualitative data is collected through interviews and focus groups (discussion groups that allow stakeholders' points of view to be blended). The joint analysis of these two data sources will enable us to improve the breadth and depth of our analyses and to design more robust recommendations.

To learn more about the Manitoba Rural Broadband Project, click here.

To learn more about RDI’s other rural research initiatives, visit us at https://www.brandonu.ca/rdi/.