Knerdlings: Improving Courses through Simulation

Knewton · Published in Knerd · Dec 8, 2011

Written by Ashley Miller, Christopher Tang, David Kuntz, George Davis, Jesse St. Charles, John Davies and Sidharth Gupta

No two students are identical — they learn and forget at different rates, come from different educational backgrounds, and have different intellectual capabilities, attention spans, and modes of learning. As a result, designing a real-time recommendation engine that is sensitive to the idiosyncrasies of each student is an immensely complex task.

Here at Knewton we are addressing this challenge head-on, using educational path planning technologies and advanced models of student ability. These technologies and models ensure that, no matter how different students are when they enter our system, each of them progresses through the course material in a way that maximizes his or her individual rate of learning. In our ongoing research and experimentation to improve these student models, our latest and most diabolical step has been to create our very own army of virtual abecedarians (Knerdlings), who assist in the testing of our recommendation engine.

Testing Our Recommendation Engine

One good way to evaluate the courses that make use of our recommendation engine is, of course, to collect and analyze real data generated by real students. These analyses are critical to maintaining and improving the quality of our courses for future students, but real students take real time to complete a course. So, to keep researching and developing in the interim, we needed a flexible, automated way of generating student data. With this goal in mind, we engineered a virtual student named “Isaak” and used him as the prototype for a whole army of Knerdlings.

Beyond generating usable data, this strategy offers us several other advantages. First, it gives us an efficient way to simulate taking our course over and over (and over and over) again. Second, we gain fine-grained control over the many possible kinds of students who could use our course in the future. Third, we can induce particular behaviors in students and simulate a broad range of student abilities and learning styles. Last but not least, we can be as realistic as we need to be, and nuanced in exactly the ways we expect a real-world student to be. With these thoughts in mind, we designed the inner workings of Isaak’s mind.

Modeling Student Behavior

The core of Isaak’s mind is a graphical (that’s “graph-based,” not “made of pretty pictures”) model that keeps track of the current state of Isaak’s latent abilities — like his proficiency in “Solving 30–60–90 triangle problems,” or in “Understanding trigonometry”. If Isaak were a real student in a Knewton-powered course, he would be exposed to instructional content and take assessments, and his performance on these assessments would play a role in determining his path through the course. Each student’s latent abilities determine how well the student performs on any assessment of a given topic or concept — in technical parlance, latent ability contributes to a “generative model of performance”. To model student behavior more accurately, Isaak’s latent ability levels also need to improve over time as he is “exposed” to more material, and to degrade without proper reinforcement.
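To make the “generative model of performance” idea concrete, here is a minimal sketch in Python of how a latent ability could drive simulated responses, assuming a simple logistic (IRT-style) link between ability and item difficulty. The function names and the particular functional form are illustrative assumptions, not a description of Knewton’s actual model.

```python
import math
import random

def p_correct(ability, difficulty):
    """Probability that a simulated student answers an item correctly,
    via a logistic (IRT-style) link between latent ability and item
    difficulty. The functional form here is purely illustrative."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def simulate_response(ability, difficulty, rng=random):
    """Sample a single correct/incorrect response from the generative model."""
    return rng.random() < p_correct(ability, difficulty)

# A Knerdling with average ability facing an average-difficulty item
# answers correctly about half the time.
print(p_correct(ability=0.0, difficulty=0.0))          # 0.5
print(simulate_response(ability=1.2, difficulty=0.3))  # correct more often than not
```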

In short, Isaak needs to be able to learn, and (regrettably) needs to be able to forget. Inspired by research that began with Hermann Ebbinghaus’s work on memory retention and learning curves, we model the changes in Isaak’s latent ability as he learns and forgets using exponential growth and decay curves. Isaak doesn’t actually do any learning (or forgetting), of course — he just receives a “bump” in his virtual ability level for a topic each time we expose him to content associated with that topic. (The precise nature of that “bump” depends on the functional form of the learning curve.) Likewise, if Isaak is not exposed to some other topic, he might forget it over time. The forgetting curve that governs Isaak’s rate of retention is roughly described by the following formula:

R = e^(-t/S), where R is memory retention, S is the relative strength of memory, and t is time.
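As a rough illustration, the snippet below sketches how a Knerdling’s latent ability on a single topic might receive a “bump” on exposure and then decay along the exponential forgetting curve above. The bump size, the memory-strength value, and the exact update rule are simplifying assumptions made for this example.

```python
import math

def retention(hours_since_exposure, strength):
    """Ebbinghaus-style forgetting curve, R = e^(-t/S): the fraction of a
    memory retained after t hours, given memory strength S (in hours)."""
    return math.exp(-hours_since_exposure / strength)

def expose(ability, bump=0.5):
    """Bump the Knerdling's latent ability when he is shown content on
    this topic. The bump size is an arbitrary choice for illustration."""
    return ability + bump

# Study a topic once, then leave it alone and watch the ability decay.
strength = 48.0            # assumed memory strength, in hours
ability = expose(0.0)      # initial exposure
for day in range(1, 5):
    retained = ability * retention(24 * day, strength)
    print(f"day {day}: ability retained = {retained:.2f}")
```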

By integrating this process into the simulation, we can capture the way in which a student’s knowledge waxes and wanes, depending on how and when they are exposed to various content. The resulting simulation isn’t a perfect recreation of real student behavior, but, in addition to being really cool, the corps of Knerdlings generated in the simulation lets us test the algorithms that govern a student’s flow through the course quite stringently, while maintaining enough simplicity for us to understand the results.
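Putting the pieces together, here is a hedged, self-contained sketch of what one pass of such a simulation might look like: a toy “engine” repeatedly picks the topic with the lowest current ability, the Knerdling’s response is sampled from the logistic performance model, and abilities are bumped on exposure and decayed everywhere else with each simulated hour. The recommendation rule, parameters, and data structures are all invented for illustration and are far simpler than our actual engine.

```python
import math
import random

rng = random.Random(0)
TOPICS = ["30-60-90 triangles", "Unit circle", "Trig identities"]
STRENGTH = 48.0     # assumed memory strength, in hours
BUMP = 0.5          # assumed ability bump per exposure
STEP_HOURS = 1.0    # simulated time between interactions

abilities = {topic: 0.0 for topic in TOPICS}

def recommend(abilities):
    """Toy recommendation rule: work on the weakest topic."""
    return min(abilities, key=abilities.get)

def p_correct(ability, difficulty=0.0):
    """Logistic link between latent ability and probability of success."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

correct_count = 0
for step in range(50):
    topic = recommend(abilities)
    if rng.random() < p_correct(abilities[topic]):
        correct_count += 1
    # Exposure bumps the studied topic; every topic decays with time.
    abilities[topic] += BUMP
    for t in TOPICS:
        abilities[t] *= math.exp(-STEP_HOURS / STRENGTH)

print(f"correct answers: {correct_count} / 50")
print({t: round(a, 2) for t, a in abilities.items()})
```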

Visualizing the Data

Once we were ready to set our army of Knerdlings loose in our courses, we needed to make sense of the copious amount of data they were about to generate. We did this by summarizing the results in a visualization that lets us understand the behavior of our virtual students graphically (this time, that’s “graph-based” and “in a pretty picture”). The visualization lets us watch latent ability levels change over time, as individual Knerdlings interact with a Knewton course, in a way that’s much easier to grasp than, say, a spreadsheet containing the exact values of a student’s ability on every concept for each hour that they were enrolled. We have nothing against spreadsheets here at Knewton — they have their place — but problems and strange behavior jump out much more clearly to us visually.
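For readers curious about what such a view might look like, here is a small matplotlib sketch that plots hypothetical per-concept ability trajectories for a few Knerdlings over a week. The exposure pattern and parameters are invented for illustration; this is not output from our actual simulation or visualization tool.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
hours = np.arange(168)                   # one week, sampled hourly

for name in ["Knerdling A", "Knerdling B", "Knerdling C"]:
    level = 0.0
    trajectory = []
    for _ in hours:
        if rng.random() < 0.05:          # occasional exposure to content
            level += 0.5                 # learning "bump"
        level *= np.exp(-1.0 / 48.0)     # hourly forgetting, S = 48 hours
        trajectory.append(level)
    plt.plot(hours, trajectory, label=name)

plt.xlabel("hours enrolled")
plt.ylabel("latent ability (arbitrary units)")
plt.title("Simulated ability on one concept over time")
plt.legend()
plt.show()
```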

Creating Better Algorithms

The visualization tool will also have applications outside of Knewton HQ. In the hands of an instructor, for example, it could help track the progress of students and allow the instructor to adjust her course plan to account for her students’ likely learning states. Say that, at a point well into the semester, a student has passed a sizable number of the units in the course. The student has most likely forgotten some of the topics that were covered earlier in the semester and not revisited since. Using this visualization, along with the Ebbinghaus-based model of learning and forgetting, the instructor can now see at a glance which parts of the course the student is most likely to have forgotten — even which parts the student is likely to forget in the future — so that she can remediate her student appropriately.

So far, this graphical tool and its associated models are operational only internally, but those models are already helping us create better algorithms, which in turn will start moving students through Knewton-powered courses more efficiently than ever before in the coming academic year.
