Learning Analytics: Facing the Critics

Emma Bergh · Published in Eduflows
Oct 28, 2019

In the corporate world, analytics are increasingly used to inform decision-making. Credited with optimising business performance, analytics have begun to infiltrate education in the form of Learning Analytics (LA). While some have welcomed the advent of LA, for others it raises contentious questions. Do analytics have any place in education? How relevant is the data that is generated? In this blog post, I will touch on these issues and conclude with a brief discussion of how reliable research may help to dispel these concerns.

Firstly, it is clear that a number of commentators have misgivings about the relevance of analytics to the educational domain. Whereas the aim of analytics is to render the complex simple, many would argue that education is an inherently complex endeavour. For example, skill development is generally thought to occur through a complex interplay of factors including social, physical and/or virtual contexts (Lodge & Lewis, 2012). Conversely, LA adopts a reductionist approach to skill measurement, isolating individual skills from the social context in which they occur. In this respect, Roberts-Mahoney, Means & Garrison (2016) cite the example of critical thinking. Traditionally assessed through real-life interactions in context-rich settings, critical thinking in LA is measured by a discrete set of quantifiable behaviours. While students may demonstrate these individual behaviours in an online environment, it is debatable whether they add up to an integrated skill that transfers to an analogue setting.

Drilling deeper, the very data that LA produces is controversial. What exactly is being measured? How accurate are these data? Who is responsible for generating the algorithms? Defined by O'Neil (2016) as "an opinion formalized in code" (p. 53), algorithms are inherently subjective and reflect the judgements and priorities of their creators. For example, in calculating students' mastery levels, one LMS allows teachers to choose between an achievement model and a task completion model. In the former, students demonstrate their mastery of a topic through their assessment scores, whereas in the latter, students are required to complete all lesson activities in order to demonstrate mastery. In both cases mastery is reported, with no indication as to how it was determined.
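To make the point concrete, here is a minimal sketch in Python of how two such configurable mastery rules might diverge. The field names, the 80% threshold and the functions are illustrative assumptions on my part, not the workings of any particular LMS.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of two "mastery" rules; names and threshold are assumptions.

@dataclass
class StudentRecord:
    assessment_scores: List[float]  # quiz/test scores as percentages
    activities_completed: int       # lesson activities finished
    activities_total: int           # lesson activities assigned

def mastery_achievement(record: StudentRecord, threshold: float = 80.0) -> bool:
    """Achievement model: mastery = average assessment score meets a cut-off."""
    if not record.assessment_scores:
        return False
    average = sum(record.assessment_scores) / len(record.assessment_scores)
    return average >= threshold

def mastery_task_completion(record: StudentRecord) -> bool:
    """Task completion model: mastery = every lesson activity was completed."""
    return record.activities_completed == record.activities_total

# The same student can be "master" under one rule and not the other,
# yet a dashboard may simply display "Mastered" either way.
student = StudentRecord(assessment_scores=[92.0, 88.0],
                        activities_completed=3, activities_total=5)
print(mastery_achievement(student))      # True  (high scores)
print(mastery_task_completion(student))  # False (activities unfinished)
```

The point of the sketch is not the code itself but the opinion embedded in it: someone chose the threshold, and someone chose which rule counts as "mastery".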

In order to dispel concerns such as those outlined above, it is important to conduct reliable research. In this respect, Dringus (2012) proposes five key requirements in addressing the potentially harmful and unreliable application of LA:

  • Meaningful data
  • Transparent data
  • Justifiable algorithms
  • Responsible assessment
  • Feedback on practice

I envisage the above criteria forming the basis for a practitioner-informed design-based research (DBR) inquiry cycle. Teachers will reflect on the relevance and application of available LA data by asking questions such as "To what extent is this data meaningful?" and "How does it link to my practice?"

Finally, I aim to ensure the internal and external reliability of my study in the following ways. Internal reliability concerns the extent to which the data have been independently collected and analysed; this can be achieved through member checking and triangulation of data. During the analysis phase, I will also invite peer review as well as external audits. External reliability is ensured when the findings of the study depend on "subjects and conditions, and not on the researcher" (Bakker, 2014, p. 26). It is closely related to the notion of trackability and means that all stages of the study need to be carefully documented so that the reader can "track the learning process of the researchers and … reconstruct their study" (p. 26).

References

Bakker, A. (2014). An introduction to design-based research with an example from statistics education. In A. Bikner-Ahsbahs, C. Knipping, & N. Presmeg (Eds.), Doing qualitative research: Methodology and methods in mathematics education. New York: Springer.

Dringus, L. (2012). Learning analytics considered harmful. Journal of Asynchronous Learning Network, 16(3), 87–100. doi: 10.24059/olj.v16i3.272

Lodge, J., & Lewis, M. (2012, November). Pigeon pecks and mouse clicks: Putting the learning back into learning analytics. Paper presented at the 29th annual ascilite conference, Future Challenges, Sustainable Futures, Wellington, New Zealand. Retrieved from http://www.ascilite.org/conferences/Wellington12/2012/images/custom/lodge%2C_jason_-_pigeon_pecks.pdf

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First ed.). New York: Crown.

Roberts-Mahoney, H., Means, A.J., & Garrison, M.J. (2016). Netflixing human capital development: Personalized learning technology and the corporatization of K-12 education. Journal of Education Policy, 31(4), 405–420. doi: 10.1080/02680939.2015.1132774
