Designing for agency in online learning

Benji Xie
Published in Bits and Behavior
Aug 12, 2020 · 8 min read

It’s important for learners to have control over their online learning experiences. But designing the information to make that happen is nuanced!

Imagine if you told somebody who had never programmed before to “go learn it on the internet.” How would that happen? They might search the internet for “how to code.” And maybe they watch some YouTube videos that provide some instruction but no practice. Or maybe they go to a site like Codecademy and try to follow along, but get frustrated because they want to explore instead of following some predefined path. Except for those with immense self-efficacy, metacognition, and time, this experience will almost certainly result in frustration and loss of interest. This challenge of helping learners guide their own online learning experiences matters as more and more people turn to the internet to try to learn programming on their own.

How learners navigate online learning experiences affects how effectively they will learn. Providing learners the agency to make meaningful decisions about their learning can benefit their learning experiences, but making decisions can be hard work! So my lab mates and I explored how affording agency, as well as information to inform that agency, would affect learning.

Informing decisions is a prerequisite to enabling agency.

First, we identified information that we thought was important to learners making decisions about what to learn next (about how to exert agency). We identified that this information would inform learners about 1) what there was to learn, 2) where they currently were in their learning, and 3) what they could do next. If learning were a hike, this information is similar to providing a map of the area, information about where you have already gone, and recommendations on where you may want to travel next!

We distilled this information into five features shown in the figure below:

Screenshots of part of Codeitz visualizing the five components that inform decision-making
The information we designed to inform learners of what they may want to learn next.
  • The world view was a hierarchy of concepts showing the dependency relationships between concepts (e.g. how it would help to learn data types before learning arithmetic operators or variables). We intended this to provide learners a map of what there was to learn (a minimal sketch of such a concept graph follows this list).
  • Exercise feedback was intended to provide basic feedback so learners would know whether their attempt at a practice exercise was correct or not.
  • Progress indicators showed learners what they had already done compared to what else there was to do.
  • Skill bars were estimations of learners’ mastery of concepts based on their previous performance on practice exercises. We intended for learners to use these to determine whether they should continue learning about a certain concept or move on to a different one (also sketched after this list).
  • Recommendations were suggestions of what practice exercises they may want to explore next. We intended for these to provide learners with alternate pathways to their learning (e.g. “try this exercise to review something you already learned” or “try this exercise to challenge yourself!”).
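
To make the world view and skill bars concrete, here is a minimal sketch in Python (the language Codeitz taught) of data structures that could back them. All names here are hypothetical and the mastery model is a stand-in; this is not Codeitz’s actual implementation.

    # Hypothetical sketch, not Codeitz's actual implementation.
    # World view: a dependency graph mapping each concept to its prerequisites
    # (e.g. learn data types before arithmetic operators or variables).
    CONCEPT_PREREQS = {
        "data types": [],
        "arithmetic operators": ["data types"],
        "variables": ["data types"],
        "relational operators": ["data types"],
        "conditionals": ["relational operators"],
    }

    def unlocked_concepts(completed):
        """Concepts whose prerequisites are all done: candidates to learn next."""
        return [
            concept
            for concept, prereqs in CONCEPT_PREREQS.items()
            if concept not in completed and all(p in completed for p in prereqs)
        ]

    def skill_estimate(attempts):
        """Skill bar value: fraction of recent exercise attempts that were
        correct (a stand-in for whatever mastery model Codeitz actually used)."""
        recent = attempts[-5:]  # weight recent performance
        return sum(recent) / len(recent) if recent else 0.0

    print(unlocked_concepts({"data types"}))
    # -> ['arithmetic operators', 'variables', 'relational operators']
    print(skill_estimate([True, False, True, True]))  # -> 0.75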

We varied agency and information afforded across 3 versions of Codeitz

Our goal in this study was to understand how varying agency and information to inform decision-making affected learning. To do so, we designed three variations of an online learning platform that we called Codeitz (“Code in the zone”). The figure below describes these variations:

quadrant of Codeitz versions on dimensions of agency afforded (low/high) and adaptive info from system (uninformed/informed)
Our three variations of Codeitz varied the amount of agency afforded and the information provided to inform that agency.
  • The uninformed high-agency (UH) version of Codeitz afforded learners the agency to learn in whatever order they wanted. It provided the world view to show the relationships between concepts, progress indicators to show what they had/had not done, and exercise feedback. This version of Codeitz did NOT include adaptive feedback from the system (skill bars, recommendations).
  • The informed low-agency (IL) version of Codeitz did not afford learners the agency to decide what to learn next. Instead, learners had to defer to the system to select the next concept for them to learn. They could see all the features of Codeitz (except the world view), but the system decided for them what to learn next.
  • The informed high-agency (IH) version of Codeitz afforded learners the agency to make decisions for themselves and provided all features to inform their decision-making. The IH version of Codeitz was essentially the IL version with skill bars and recommendations. We intended this version to be like a recommender system in that learners could consider system suggestions, but could ultimately make whatever decision they wanted! (The sketch after this list summarizes how the three conditions differ.)
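
As a quick summary of that contrast, here is a hypothetical feature-flag view of the three conditions. This is illustrative only, not how Codeitz was actually configured, and the flag names are mine.

    # Hypothetical feature flags summarizing the three study conditions.
    # All conditions also had progress indicators and exercise feedback.
    CONDITIONS = {
        # Uninformed High-agency: learner chooses; no adaptive features.
        "UH": {"learner_chooses": True, "world_view": True,
               "skill_bars": False, "recommendations": False},
        # Informed Low-agency: system chooses; adaptive features, no world view.
        "IL": {"learner_chooses": False, "world_view": False,
               "skill_bars": True, "recommendations": True},
        # Informed High-agency: learner chooses, with every feature available.
        "IH": {"learner_chooses": True, "world_view": True,
               "skill_bars": True, "recommendations": True},
    }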

The gif below demonstrates the experience of the informed high-agency (IH) version of Codeitz:

Using Codeitz: Select a concept from a hierarchy, select content to read or practice. Interface updates with practice submissions.
Demonstration of the Informed High-Agency (IH) version of Codeitz, a recommender system.

All three versions of Codeitz taught the same introductory Python curriculum that assumed no prior programming knowledge. This curriculum was based on my theory of explicitly teaching programming skills, which I previously wrote about (see my Medium article about that theory).

Learners did not find the adaptive features to be informative

To understand how varying agency and information affected learning, we conducted a study with 79 novice programmers, where each participant was randomly assigned to spend a week using one version of Codeitz and then take a post-test to measure what they learned. We also surveyed them to understand how different features of Codeitz affected their learning experiences. The figure below shows post-survey results on how learners perceived the importance of different features of Codeitz:

Results of Likert-like data comparing importance of features of Codeitz from participants in 3 conditions of Codeitz.
Importance of different information we provided to learners. In general, learners found the world view/concept hierarchy the most helpful and the adaptive recommendations and skill bars the least helpful. 😮

The world view (concept hierarchy) was important to the high-agency learners who saw it and desired by the informed low-agency (IL) learners who did not have it. This suggests that learners saw benefit in having an overview of what there was to learn. While we intended for this hierarchical relationship to encourage learners to learn in any order they wanted, many learners who saw the world view did not want the burden of the decision. Instead, they wanted to be given an “ideal” path to follow or assumed “top to bottom, left to right” was the intended order to learn concepts.
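
Interestingly, the dependency graph behind the world view already implies such a default path. As a hypothetical illustration (not something Codeitz did), a topological sort of the concept graph from the earlier sketch yields exactly the kind of prerequisite-respecting ordering learners assumed:

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Reusing the hypothetical CONCEPT_PREREQS graph from the earlier sketch.
    CONCEPT_PREREQS = {
        "data types": [],
        "arithmetic operators": ["data types"],
        "variables": ["data types"],
        "relational operators": ["data types"],
        "conditionals": ["relational operators"],
    }

    # static_order() lists every concept after its prerequisites; the order
    # among same-level concepts may vary.
    default_path = list(TopologicalSorter(CONCEPT_PREREQS).static_order())
    print(default_path)
    # e.g. ['data types', 'arithmetic operators', 'variables',
    #       'relational operators', 'conditionals']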

The progress indicators and exercise feedback were deemed important across conditions, which was expected as this information was intended to be consistent with other online learning experiences.

The adaptive features of the skill bars and recommendations were found to be the least important of the features. While this was a bit surprising (and, ok, a bit disappointing), qualitative feedback suggests it was because these features were not explained thoroughly enough and therefore seemed to behave in unexpected ways. (Lots more about this in our paper, linked at the end.)

Learning did NOT vary across conditions?! 😨

We went into this study investigating how varying agency and information would affect learning. However, we did not find evidence that it actually did! As shown in the figure below, we saw a lot of variation in post-test performance within conditions, but no pattern of difference across conditions! This means we do not have evidence that more or less agency or information is better for learning. Why is that?

Boxplots with data points comparing post-test scores across 3 conditions. There is no detectable difference.
We did not see a difference in post-test performance across conditions

First, there were many other factors that affected learning. We found that programming self-efficacy, prior programming experience, and amount of Codeitz curriculum completed all affected learning outcomes. That these factors affected learning was expected.

We also found that the vast majority of learners in the high-agency conditions tended to complete all exercises. We primarily recruited undergraduates who were apparently highly motivated to learn as much as they could! So when most learners across multiple conditions try all the practice, differences in learning outcomes become hard to detect.

Finally, undergraduates may be less familiar with exercising agency. They had likely grown accustomed to a university education that tended to define learning pathways for them, so being asked to exercise agency and guide their own learning experiences may have been uncomfortable or unusual. In short, exercising agency likely deviated from learners’ expectations.

Conclusion: Designing information to inform agency is nuanced, but promising.

We interpret our findings in the context of design implications for self-directed online learning environments. While much prior work investigates how agency affects learning (with mixed results trending towards agency benefiting learning), our study looked specifically at how different information affected agency.

We have more questions than answers at this point, but here are some key considerations when designing information for agency:

Design implications related to recommendations, the programming domain, and expectations about agency.

Perceptions of adaptive indicators evolve. Trust in recommendations must be earned, and providing more information on what to expect from adaptive indicators can help with that. Furthermore, we found that learners’ perceptions of recommendations evolved as they used Codeitz more. While some learners started ignoring recommendations, others eventually found them beneficial for many reasons (see paper!) as the recommendations were trained on more data and became more accurate/helpful.

Programming is a unique domain. Programming language semantics have pretty rigid dependencies that make it hard to exert some forms of agency. The agency we tried to encourage in this study involved learners jumping around to different concepts as they decided to. But this may not be appropriate for learning programming semantics. For example, a novice trying to learn conditionals with no knowledge of relational operators may struggle unproductively. Perhaps a more effective form of agency is having more predefined paths from which learners can jump ahead or jump back.
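
To make that dependency concrete, consider the kind of beginner Python below: even the simplest conditional already embeds a relational operator, so a learner who skipped relational operators must untangle two new ideas, branching and comparison, at once.

    temperature = 35

    # This first conditional already depends on understanding `>`:
    # without relational operators, both the branching and the
    # comparison are new at the same time.
    if temperature > 30:
        print("It's hot out!")
    else:
        print("It's not so hot.")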

Agency may not be the expectation. Exerting agency is a challenging task that requires self-efficacy (believing that you can make decisions to benefit your learning) as well as self-regulation (having awareness of where you are in your learning experience). Furthermore, decision-making can be challenging, especially if learners are used to having learning pathways defined for them in formal learning settings.

Despite this, I am convinced that we can design online learning experiences such that learners can interpret information in different ways, make personally meaningful decisions that benefit their learning, and have ownership over unique yet equitable learning experiences. 🚀

Try Codeitz! Read the paper!

If you are interested in more about the design of Codeitz and our rich qualitative findings on how learners interpreted its features, I encourage you to read our Learning at Scale paper (linked below). And if you or somebody you know wants to learn Python, give Codeitz a try!

