Misadventures in Teaching with Technology: First xAPI Reports

Katrina
4 min read · Oct 10, 2016


When I first began this foray into collecting data via xAPI, I wasn’t really sure what I should target. After all, I’ve never been able to collect data at this level before, so part of my problem was that I didn’t know what I didn’t know. I still don’t really know what I don’t know, but at least I have a general idea now.

So my initial “strategy” has been to try to collect data that I feel will help me determine whether students have interacted with my online content. By “interacted,” I really mean “viewed” for any length of time. This felt like a good place to begin, since there isn’t much of a limit to what one can track once the technology is implemented correctly.

To attempt to record this information, I added questions and interactive H5P tools throughout my online pages and had students answer the questions as they worked through the material. My goal was to have a way to look at the statements (starting with “viewed” and ending with “answered” on the final interactive question), determine how long students were on the page, and judge whether it seemed plausible (based solely on my own opinion) that they actually looked over the information provided.
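For anyone curious what that calculation might look like, here is a rough sketch in Python. It is not my actual reporting setup: it assumes the statements have already been pulled from the LRS as a list of dicts, and the verb IDs and field names are my assumptions about how H5P and the LRS record things.

```python
# Rough sketch: estimate per-student time on page from xAPI statements.
# Assumes statements were already fetched from the LRS as a list of dicts.
# Verb IDs follow common xAPI vocabularies but may differ in a real setup.
from datetime import datetime

VIEWED = "http://id.tincanapi.com/verb/viewed"
ANSWERED = "http://adlnet.gov/expapi/verbs/answered"

def time_on_page(statements):
    """Rough duration per student: first 'viewed' to last 'answered'."""
    first_view, last_answer = {}, {}
    for s in statements:
        actor = s["actor"].get("mbox", "unknown")
        verb = s["verb"]["id"]
        ts = datetime.fromisoformat(s["timestamp"].replace("Z", "+00:00"))
        if verb == VIEWED:
            first_view[actor] = min(ts, first_view.get(actor, ts))
        elif verb == ANSWERED:
            last_answer[actor] = max(ts, last_answer.get(actor, ts))
    # Only students with both a view and an answer get a duration
    return {a: last_answer[a] - first_view[a]
            for a in first_view if a in last_answer}
```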

Before anyone points out that this method is about as solid as Swiss cheese, let me say that I am acutely aware of all the holes it presents. My main goal in this pet project is to see what I can get and to figure out a pattern of baseline behavior for my class. It’s very unscientific and mostly for exploration purposes, and by no means am I drawing any sweeping conclusions from it. In fact, I’m hoping that by starting here, I’ll be better positioned next time to be more targeted in the way I collect information.

That said, I had a situation in my class the other week where I wondered whether looking back at the data would help me figure out what happened. The way I’ve structured the course, students ideally work through the online content (I’ve been calling it “pre work”) before coming to class. The online content anchors the students’ collective knowledge at a common foundation level and introduces them to what we will be doing in class together (for more about my philosophy behind this decision, check out my other posts about my first-year seminar class).

I had put together a page of pre work related to workshopping resumes (which we would be doing in class the following week). It had tips, suggestions, and general protocol for workshopping, as well as a host of resources to help compare aspects such as formatting and layout.

When I noticed that a significant portion of the students seemed utterly confused during our resume workshop, I suspected the most likely cause was that they did not do the pre work. The questions and comments I got were things like “Are there any examples we can look at?” (yes, in the pre work there were samples for download), “How should this be formatted?” (again, all included in the pre work) and “I didn’t know we had to bring our resume to class!” (also noted in the pre work). I wouldn’t call myself a seasoned teacher by any means, but having spent time in the high school setting, I knew these questions were all indicative of one problem: no one read the directions.

Luckily, I was able to check my hypothesis because of the xAPI statements that are being collected in the online space.

Sure enough, when I logged in, the statements bore this out. I was able to filter my data down to the page I was targeting and set a range of dates within which students would have needed to access that page to be ready for class.
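If you were doing that filtering step by hand on an export rather than through the LRS interface, it might look something like the sketch below. The file name, column names, page URL, and dates are all placeholders, not my actual export format.

```python
# Sketch of filtering an exported set of statements down to one page and
# a date window. Column names and values here are hypothetical.
import pandas as pd

df = pd.read_csv("statements_export.csv")
df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)

page_id = "https://example.edu/prework/resume-workshop"  # hypothetical URL
start = pd.Timestamp("2016-09-26", tz="UTC")             # page went live
workshop_day = pd.Timestamp("2016-10-03", tz="UTC")      # day of the workshop

before_class = df[
    (df["activity"] == page_id)
    & (df["timestamp"] >= start)
    & (df["timestamp"] < workshop_day)
]
```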

When I exported the list and eliminated staff members who were exploring my content (it is open to anyone within the PSU system), I was left with a list of 12 students who had accessed the page before the day of the workshop. Now, I realize this doesn’t prove 100% that skipping the pre work caused the confusion; however, the students who did log in and at least left their computers on the page for a period of time (because, again, I can’t prove a student is actually reading just because the page is on the screen) were the ones who had a good handle on what we were doing in class.
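The staff-exclusion step is just a continuation of that same filter; something like the snippet below, with a hand-maintained (and here entirely hypothetical) list of staff accounts.

```python
# Continuing the sketch above: drop staff accounts and see which students
# accessed the page in time. The staff list is a hypothetical placeholder.
staff = {"mailto:colleague@psu.edu", "mailto:me@psu.edu"}
students = before_class.loc[~before_class["actor"].isin(staff), "actor"].unique()
print(len(students), "students accessed the pre work before the workshop")
```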

I also can’t say for certain that the root cause of the problem was that they chose not to read. Perhaps some of them didn’t realize what was expected, or maybe they thought they read the right thing but didn’t… the list of excuses goes on.

But based on this information, I was able to modify my practice and provide more direct instructions for the next week’s pre work. I put the link to the content in a more prominent place, and made sure to stress its importance both in class and in my weekly email to the students.

Having done that, I am interested to see what the comparison is like for the following week’s content.

In summary, I guess this instance doesn’t prove the value of xAPI as the great answer to our data analytics woes, or at least not quite yet. However, it was definitely handy to have a way to know which students had at least taken the time to click on the link and answer the questions on the workshop protocol page, and to be able to surmise from that information that the majority of the class didn’t get the directions ahead of time as I had planned and hoped they would. For now, that’s enough to satisfy me. As I get more familiar with what I’m collecting and the types of questions I have about my students’ online behavior, I’ll be able to be more targeted in my xAPI data gathering.
