On Learning Analytics

First, it is important to clarify what exactly learning analytics is.

Learning analytics:

  • “can be summarized as the collection, analysis, and application of data accumulated to assess the behavior of educational communities.” (p. 1)
  • “gives all stakeholders insight into what is taking place from Day 1 to Day X of a given class irrespective of the type of activity taking place.” (p. 2)
  • “encompasses a range of cutting-edge educational technologies, methods, models, techniques, algorithms, and best practices that provide all members of an institution’s community with a window into what actually takes place over the trajectory of a student’s learning.” (p. 2)
  • “ideally attempts to leverage data to provide insight into the activities taking place within the classroom.” (p. 2)

In a nutshell: learning analytics deals with the question of how to effectively support teaching and learning with technology, which methods to use, and how to analyze their use in all its complexity.

Next, I would like to continue with chapter 8 of the book: “Identifying Points for Pedagogical Intervention Based on Student Writing: Two Case Studies for the ‘Point of Originality’” (pp. 157–190). This chapter deals methodologically with a concept called the “Point of Originality.” What does the word “original” mean in this concept? “It is this ability to put concepts into one’s own words, discovering more ‘original’ expression of the same concepts, that is meant by the term ‘originality’ in the Point of Originality’s name.” (p. 162) The Point of Originality is then a computational method “which measures a student’s ability to put key course concepts into his or her own words as a course progresses.” (p. 10)
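To make that intuition concrete, here is a toy sketch of my own (it is not the metric from the chapter): ask how much of a student’s sentence recycles the exact wording of the course text, as opposed to new vocabulary. The function name and example sentences are made up for illustration.

```python
# Toy illustration of the "own words" intuition, not the chapter's actual metric.
def rephrasing_score(course_sentence: str, student_sentence: str) -> float:
    """Share of the student's longer words that do NOT appear verbatim in the source."""
    course_words = set(course_sentence.lower().split())
    student_words = [w for w in student_sentence.lower().split() if len(w) > 3]
    if not student_words:
        return 0.0
    novel = [w for w in student_words if w not in course_words]
    return len(novel) / len(student_words)

source = "learning analytics is the collection and analysis of educational data"
print(rephrasing_score(source, "learning analytics is the collection of educational data"))    # low: verbatim reuse
print(rephrasing_score(source, "it means gathering and examining what students leave behind"))  # high: rephrased
```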

Brandon White and Johann Ari Larusson address the problem of large gateway courses and the evaluation of all the students enrolled in them. These courses have considerable downsides because of the high number of students, and the negative impact is felt by students and instructors alike. The most pressing questions are: do the students understand the course materials, and can such a large number of attendees harm their learning process?

The best way to evaluate students’ performance in a course in all its complexity is probably to evaluate the students’ writing. Qualitative assessment is, however, impossible in these so-called gateway courses. The more feasible approach would be to evaluate the writings quantitatively, which is not possible without developing a metric for evaluating students’ writing. The goal is then to substitute the qualitative assessment with a quantitative one that can preferably be visualized on a timeline (activity over the whole course).

In two case studies, the authors describe how they managed to do this using, among other tools (Table 8.2), a lexical resource called WordNet.

WordNet® is a large lexical database of English. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations. The resulting network of meaningfully related words and concepts can be navigated with the browser. WordNet is also freely and publicly available for download. WordNet’s structure makes it a useful tool for computational linguistics and natural language processing.
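For readers who want to poke at WordNet themselves, here is a minimal sketch using NLTK’s WordNet interface (my own example, not code from the book). It looks up the synsets for the word “originality” and one of their relations:

```python
# Minimal WordNet lookup via NLTK (illustrative only, not the book's implementation).
import nltk
nltk.download("wordnet", quiet=True)  # fetch the WordNet data on first run
from nltk.corpus import wordnet as wn

# Each synset groups words ("lemmas") that express one distinct concept.
for synset in wn.synsets("originality"):
    print(synset.name(), "-", synset.definition())
    print("  synonyms:", [lemma.name() for lemma in synset.lemmas()])

# Synsets are interlinked, e.g. by hypernymy ("is a kind of") relations.
first = wn.synsets("originality")[0]
print("hypernyms:", [h.name() for h in first.hypernyms()])
```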

I personally find it astonishing that someone came up with the idea of using computational linguistics to assess course writings. There are many other great corpus linguistics alternatives (my field of study) that could theoretically be used to automatically assess any written text, for example AntConc by Laurence Anthony (keyword lists, collocates, concordance plots). It could even be used to view several writings or the original course materials and compare them to one another simultaneously (AntPConc). Another great option could be Centering Resonance Analysis (CRA, which analyzes text frames) together with the software called Visone:

Centering Resonance Analysis (CRA) extracts a network from a text by analysing its centers, for which the Centering Theory states that they contain the main contents of the text. According to Centering Theory, these centers are the Noun Phrases (NPs) of a text, that is the nouns together with any modifiers belonging to them. Thus, words within these centers define the words within the CRA text network, and the way they occur in the text can cause links between them.
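As a rough illustration of that idea (my own simplification in Python, not the Visone/CRA implementation), one could extract noun phrases with spaCy and turn their words into a small co-occurrence network with networkx:

```python
# Toy CRA-style network: noun-phrase words become nodes, co-occurrence becomes edges.
# This is a simplification for illustration, not the Visone/CRA implementation.
import itertools

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
text = ("Learning analytics leverages educational data. "
        "The collected data supports teachers and students alike.")

graph = nx.Graph()
previous_words: list[str] = []
for chunk in nlp(text).noun_chunks:  # the "centers": noun phrases with their modifiers
    words = [tok.lemma_.lower() for tok in chunk if tok.pos_ in ("NOUN", "PROPN", "ADJ")]
    graph.add_nodes_from(words)
    graph.add_edges_from(itertools.combinations(words, 2))  # link words within a phrase
    if previous_words and words:
        graph.add_edge(previous_words[-1], words[0])  # bridge consecutive phrases
    previous_words = words

# Degree centrality then hints at which words "resonate" through the text.
print(sorted(nx.degree_centrality(graph).items(), key=lambda kv: -kv[1])[:5])
```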

What I personally find most problematic is that I am not completely sure whether quantitative methods of evaluation (in any course) are the right way to go. Students should get appropriate feedback on their writing. Are those visualizations of interlinked synsets the right way? Aren’t the students expecting more than a few numbers that they obviously cannot interpret themselves in a simple, reasonable way (to be honest, I was pretty lost reading those algorithmic equations myself, and I have no problem with terms like conceptual-semantic relations or noun phrases)? Is it not too abstract to be considered constructive criticism? Should it not, in the end, be the main aim of any course that the student gets more than a numerical evaluation of his or her participation and accomplishments?

Larusson, Johann Ari; White, Brandon (eds.) (2014): Learning Analytics: From Research to Practice. New York, NY: Springer.

--

Petr Kuthan

Doctoral researcher of linguistics at @muni_cz and @uni_WUE, interested in German, discourse, place making, media, and terrorism. Editor of @Vednemesicnik.