# Mastery learning and creative tasks

Our most interesting and pressing problems require people to apply their understandings creatively in unfamiliar contexts.

Maybe we have some sense of what it means to “master” a specific skill — like “classifying triangles by type.” But what does it mean to “master” the skill of *creatively manipulating understandings in novel situations?*

**Depth of knowledge and transfer demand**

At Khan Academy, we hope to help students build real understanding around what they’re learning. We’ll have failed if learners only feel confident applying concepts in routine situations; instead, they should be able to use what they’ve learned flexibly and fluently, adapting and connecting their ideas in novel settings.

As we develop activities for students on their path to thorough understanding, we want to create a smooth ramp: from recalling simple facts, to applying routine procedures, to multi-step reasoning, to extended inquiry. Instructional designers often find it useful to label tasks to make sure they appear at a reasonable spot on that ramp. We often use two related axes for those labels: *depth of knowledge* and *transfer demand*.

When students have to strategize or use creative, extended thinking to tackle a task, we say that the task requires high **depth of knowledge**. When students have to combine familiar facts, procedures, and concepts in unfamiliar ways or situations, we say that the task has far **transfer demand**.

The magic in these tasks *comes from their unfamiliarity*. If there were a lesson about “how to solve problems where the mass of an oscillator suddenly changes,” then this task would no longer demand deep knowledge or any transfer: it would become routine application of a memorized procedure.

But we can imagine shades of gray, too. What if we gave the student a set of three related far-transfer problems like the one above, each with the parameters slightly permuted? The first problem would definitely demand far transfer — but imagine the student couldn’t figure it out. So we show them a worked solution or give them some hints to help them see how the concepts combine in this new context. Now, once we’ve done that, if we show the student a second problem (identical except for some permuted parameters), they can just find-and-replace values appropriately in the worked solution we showed them. That’s much more like applying a routine procedure. It no longer requires far transfer.

Or consider a problem like this:

If students had never seen a problem like this before, the task would certainly require extended reasoning and demand far transfer of several prior ideas from geometry. But in fact, this exercise immediately follows a lesson on Khan Academy which explicitly teaches the procedure for solving for a triangle’s unknown side when two sides and the included angle are known: the “Law of Cosines.” After watching that video, students only have to execute a routine procedure to solve this problem, so we’d say it doesn’t demand any transfer at all.
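Once the Law of Cosines is in hand, the “routine procedure” really is just a few lines of arithmetic. Here’s a minimal sketch in Python (the function name and signature are mine, purely for illustration, not anything from Khan Academy’s exercises):

```python
import math

def third_side(b, c, included_angle_deg):
    """Law of Cosines: a^2 = b^2 + c^2 - 2*b*c*cos(A).

    b, c: the two known side lengths; included_angle_deg: the angle
    between them, in degrees. Returns the length of the third side.
    """
    a_squared = b**2 + c**2 - 2 * b * c * math.cos(math.radians(included_angle_deg))
    return math.sqrt(a_squared)
```

With a 90° included angle, the formula collapses to the Pythagorean theorem, which is a handy sanity check.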

All this is to say: the same problem could function successfully at a variety of depths of knowledge. Is the problem a challenging puzzle or a rote plug-and-chug question? It all depends on how the student has *“chunked”* their prior understanding.

**Chunking**

The notion of **“chunks”** in learning originates with George Miller’s *The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information*, in which he introduces the term to describe *the level of granularity which is perceived as a single unit*. Two people might “chunk” information at different levels! For instance, a child just learning to read sees words in terms of individual letters, while adults typically read words or groups of words at a glance. A toddler grasps the number “5” by counting up, one-by-one, on their fingers; eventually, with experience, “5” becomes its own “chunk” with automatically retrievable properties.

Let’s return to our triangle problem for a moment.

Students who have never seen this type of problem before must creatively combine a few chunks they might already have internalized, e.g.:

1. It’s possible to turn any triangle into two right triangles by drawing a perpendicular line from a vertex (say, *A*) to the opposite edge (say, *BC*).
2. A triangle’s interior angles sum to 180°.
3. The cosine of an acute angle in a right triangle is equal to the ratio of the adjacent side’s length to the hypotenuse’s.
4. Various algebraic facts necessary to manipulate the equations arising from #2 and #3.
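Speaking loosely, here’s a sketch of how those chunks combine, using the conventional labels *a* = *BC*, *b* = *CA*, *c* = *AB*, with *b*, *c*, and angle *A* known. (This variant drops the perpendicular from *C* to *AB*, an equivalent construction, and assumes angle *A* is acute so the foot *D* lands inside *AB*.)

```latex
% Right-triangle ratios (chunk 3) give the pieces around the foot D:
\begin{align*}
AD &= b\cos A, \qquad CD = b\sin A, \qquad DB = c - b\cos A,\\
% Pythagorean theorem in right triangle CDB, then algebra (chunk 4):
a^2 &= CD^2 + DB^2 = b^2\sin^2 A + (c - b\cos A)^2\\
    &= b^2 + c^2 - 2bc\cos A.
\end{align*}
```

That last line is exactly the Law of Cosines; once the whole derivation becomes a single chunk, only the final formula needs to be retrieved.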

Speaking informally and intuitively, a student is likely to find a problem increasingly difficult as the number of chunks they must creatively manipulate increases. Happily, after solving a few problems like this, the unknown-side-length-finding procedure becomes a “chunk,” a routine to be applied to triangles with unknown sides. That transformation is valuable because then this new chunk can be used as a single unit in a more complex problem, like this one:

This problem demands that a student additionally juggle several extra ideas, like properties of parallelograms. To a student who hasn’t already “chunked” the routine for finding a triangle’s unknown edge, this problem might appear totally impenetrable; a similar student who’s chunked that routine might see how to get started.

**Mastery learning**

Introduced by Benjamin Bloom in 1968, “**mastery learning**” suggests that instead of moving through a curriculum at a fixed pace (which necessarily produces varying degrees of understanding among individual students), classes should support students in moving at whatever pace allows them to thoroughly grasp the material before them.

Mastery learning usually requires formative assessment: students need feedback on their understanding to see where they are, where they’re going, and *how* they’re going. To offer that feedback, learning environments like Khan Academy typically offer students exercises with instant answer-checking. A student might try their hand at a skill like “finding missing angles in a triangle,” then receive an opportunity for further instruction or remediation if they struggle — or else move on to another skill if they succeed.

One big implication of mastery learning is that students should have as much opportunity to practice a skill as they’d like. Unlike a class that moves at a fixed pace, a struggling student should always be able to revisit prerequisites, read an alternative explanation, and try some new challenges. These systems usually consider a student to have finally “mastered” a skill when they can consistently answer related problems over an extended period of time.

This sense of mastery necessarily requires repetition: repetition to *remediate* and repetition to *prove*. That’s fine for a routine skill like “classifying triangles by type.” On Khan Academy, we build one exercise for each skill, and each exercise contains around twenty problems. In this case, it’s straightforward to make many problems for this skill: each one presents a different triangle and asks students to classify it. Students are given several problems and must answer most correctly; if they struggle, they can always repeat the exercise and receive different problems.

Critically, the twenty problems within the exercise are *interchangeable*. They’re all asking roughly the same thing. This way, if the student needs more opportunities to practice the skill, they have equivalent material to work with.
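As a sketch of what “interchangeable” might mean mechanically, here’s a hypothetical generator for triangle-classification problems: the numbers vary from problem to problem, but every instance exercises exactly the same skill. (All names here are invented for illustration; this isn’t Khan Academy’s actual problem-generation code.)

```python
import random

def classify(a, b, c):
    """Classify a triangle by comparing its side lengths."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def make_problem(rng):
    """Produce one interchangeable classification problem and its answer."""
    kind = rng.choice(["equilateral", "isosceles", "scalene"])
    if kind == "equilateral":
        s = rng.randint(2, 9)
        sides = (s, s, s)
    elif kind == "isosceles":
        s = rng.randint(3, 9)
        # Any base shorter than 2s (and != s) gives a valid isosceles triangle.
        base = rng.choice([b for b in range(2, 2 * s) if b != s])
        sides = (s, s, base)
    else:
        # Consecutive integer lengths always satisfy the triangle inequality.
        a = rng.randint(2, 9)
        sides = (a, a + 1, a + 2)
    prompt = "Classify the triangle with side lengths %d, %d, %d." % sides
    return prompt, classify(*sides)
```

Because every generated problem is drawn from the same small template, a struggling student can repeat the exercise indefinitely and always receive fresh but equivalent material.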

But as we’ve seen, repetition *changes the nature* of tasks with high depth of knowledge or far transfer demand. Those tasks *draw their meaning from their novelty* because their purpose is to assess whether a student can creatively synthesize their understandings in an unfamiliar situation. And if a student struggles with a high-transfer task, we can’t direct them to a video which explains how to solve “problems like these,” because these problems are defined by their inability to be routinized in that way.

So: what does it mean to build a “mastery learning” system which accommodates tasks with high depth of knowledge or far transfer demand?

# A mastery system for creative tasks

Such a system must:

- Offer enough material for struggling students to “try again” meaningfully
- Give students a clear sense of their progress

Such tasks no longer correspond to a specific skill; instead they’re more typically “synthesis” tasks which require students to combine a variety of concepts. In some sense, the tasks assess the degree to which the student has “chunked” some of the constituent ideas. Of course, they’re also assessing things like a student’s confidence in the face of unfamiliarity, tactics they might have for decomposing a larger problem, and so on.

To put it another way, if a student struggles with a high-transfer problem, what might we suggest? They should read a worked solution — sure. But then what? Some students might be struggling because they haven’t quite chunked an underlying concept. In that case, more challenging practice zeroing in on the constituent concepts might help. Still other students might be struggling because they’re intimidated by open-ended problems. Those students might benefit from practice with open-ended problems involving quite elementary concepts.

Then, if a student wants to “try again”—what does that mean? We could offer them another high-transfer problem with the “same difficulty,” but to be truly equivalent, the task must be different enough to involve a distinct creative insight. What might it mean for multiple problems of this kind to be of the “same difficulty”?

We could use some kind of item response theory to estimate a difficulty parameter, but with what model? A univariate model is pretty clearly wrong, yet it’s not clear how many underlying cognitive dimensions are involved in these problems. Maybe one parameter representing a “chunk” for each constituent concept, then one parameter for “various meta-cognitive strategies and attributes for creative problems”? Hard to say.
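For concreteness, here’s what the univariate model dismissed above actually looks like: the one-parameter Rasch model, plus a sketch of fitting a single item’s difficulty when each student’s ability is (implausibly) assumed to be already known. This is illustrative only; the names and the simplistic fitting loop are mine.

```python
import math

def p_correct(ability, difficulty):
    """Rasch (1PL) model: probability that a student with the given
    ability answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def estimate_difficulty(responses, abilities, steps=500, lr=0.1):
    """Fit one item's difficulty by gradient ascent on the log-likelihood,
    assuming student abilities are known in advance -- a big assumption.
    responses: 1 (correct) or 0 (incorrect), one entry per student."""
    b = 0.0
    for _ in range(steps):
        # d(log-likelihood)/db = sum over students of (p - response)
        grad = sum(p_correct(th, b) - r for th, r in zip(abilities, responses))
        b += lr * grad
    return b
```

Even in this toy form, the model’s core problem is visible: it compresses everything about a creative task into one scalar, with no separate parameters for each constituent chunk or for meta-cognitive strategy.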

Perhaps a more pragmatic approach might be something along the lines of:

- Create a huge bank of these problems, involving a large number of concepts at varying levels of the curriculum.
- Intermittently ask students to solve a small set of these problems concerning concepts they’ve already putatively mastered; their “score” is the number they get correct out of the set.
- The student’s “mastery goal” is to slowly increase the fraction of problems in the set they get correct — even though that fraction might start at 0.
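The scheme above could be sketched as a pair of small helpers, one for drawing the intermittent problem sets and one for tracking the score series. (All names and the data shapes are hypothetical, invented just to make the bullets concrete.)

```python
import random

def sample_problem_set(bank, mastered_concepts, k, rng):
    """Draw k problems from the bank involving only concepts the
    student has already putatively mastered."""
    eligible = [p for p in bank if p["concepts"] <= mastered_concepts]
    return rng.sample(eligible, k)

def record_attempt(history, num_correct, set_size):
    """Append the latest fraction correct; the 'mastery goal' is for
    this series to trend upward, even if it starts at 0."""
    history.append(num_correct / set_size)
    return history
```

Note what this scheme deliberately avoids: it never claims any single problem is “equivalent” to another, only that the student’s hit rate across a broad, rotating sample should rise over time.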

I’m not terribly convinced by this technique. Can you think of a better approach, dear reader? Some prior art in this space? Your author would love to hear!