Improving Learning Outcomes through Spaced Repetition

Miruna Cristus
Sana Labs
Jul 1, 2020

Often in learning, the challenge of mastering a topic lies not in the difficulty of the topic itself, but in the number of concepts one has to retain and connect. Traditional textbooks and linear courses rarely build in systematic review of previous concepts, and therefore do not optimize for long-term knowledge retention. During months-long preparation for high-stakes exams, unconsciously forgetting important topics is a threat to exam success.

Spaced repetition has been shown to be an effective technique for addressing these issues. The concept is simple: memorization is more efficient if you space out shorter review sessions over time, rather than cramming. The efficacy of spaced repetition can be further increased with smart selection of content and timing of reviews, factors that have been the subject of extensive research. As education grows in the online space, implementing spaced repetition directly into the structure of courses becomes a high-value opportunity to personalize studying and increase long-term learning outcomes.

Figure: Example of Ebbinghaus’ forgetting curve and review cycle (Chun & Heo, 2018).

Development of Spaced Repetition Algorithms

One of the most challenging questions in spaced repetition research is finding the optimal review schedule for an individual. We know it is important for a learner to periodically review concepts, questions, vocabulary, and so on; but when should they do these reviews? Which time intervals are most effective?

Early algorithms focused on second language learning: one developed by Paul Pimsleur in 1967 used fixed review schedules with exponentially increasing intervals, while the Leitner system from 1972 took a student’s performance into account by shortening the review interval when a mistake was made. State-of-the-art algorithms use a myriad of approaches, from fixed parameters to optimization methods that maximize recall while minimizing review frequency. A common method is to approximate the recall probability from a student’s interaction data and resurface an item when that probability falls below a threshold.
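The threshold method can be sketched in a few lines. This is a minimal illustration, not any particular production algorithm: it assumes an exponential forgetting curve parameterized by a half-life, and schedules the next review for the moment estimated recall drops to a chosen threshold.

```python
import math

def recall_probability(elapsed_days: float, half_life_days: float) -> float:
    """Exponential forgetting model: recall probability halves
    every `half_life_days` days since the last review."""
    return 2.0 ** (-elapsed_days / half_life_days)

def next_review_day(half_life_days: float, threshold: float = 0.7) -> float:
    """Days until estimated recall decays to the review threshold.
    Solves 2^(-t/h) = threshold for t."""
    return -half_life_days * math.log2(threshold)

# An item with a 4-day half-life should be resurfaced after ~2.06 days
# to keep estimated recall above 70%.
print(round(next_review_day(4.0, 0.7), 2))
```

Items with longer half-lives (better-known material) are automatically scheduled further out, which is exactly the spacing effect the article describes.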

Online learning and machine learning: keys to spaced repetition

While spaced repetition has repeatedly been shown to be crucial for effective learning, it is a daunting task for an individual student to incorporate it into their studying. Starting in the 1980s, software solutions such as Mnemosyne and Anki have surfaced user-generated flashcards to the learner according to an estimated review schedule. Such systems are tedious to set up: students or teachers need to spend a significant amount of time preparing their content.

A more effective approach for natively integrating spaced repetition into the learning experience is becoming more feasible — facilitated by 3 aspects:

  1. Availability of online courses
  2. Ease of data collection and analysis
  3. Machine learning research

Availability of online courses
With the massive shift of education to online platforms, spaced repetition algorithms can be included in the flow of a course.

Ease of data collection and analysis
Large-scale online data collection across populations of students is becoming the norm, enabling better estimates of model parameters, while detailed records of each student’s progress enable a higher degree of personalization. Online learning platforms like Sana allow each student to receive optimal review recommendations over multiple topics, taking into account their entire learning history.
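To make the personalization idea concrete, here is a toy sketch of estimating a memory parameter from one student's history. It is not Sana's actual method: it assumes the same exponential forgetting model as above and simply picks, from a small grid, the half-life that best explains the student's observed review outcomes (elapsed time, recalled or not) by maximum likelihood.

```python
import math

def fit_half_life(reviews, candidates=None):
    """Pick the half-life (in days) that best explains a student's
    review outcomes under an exponential forgetting model, by
    maximizing log-likelihood over a coarse grid of candidates."""
    if candidates is None:
        candidates = [0.5 * 2 ** k for k in range(10)]  # 0.5 .. 256 days

    def log_likelihood(h):
        ll = 0.0
        for elapsed_days, recalled in reviews:
            p = 2.0 ** (-elapsed_days / h)
            p = min(max(p, 1e-6), 1.0 - 1e-6)  # clamp for numerical safety
            ll += math.log(p) if recalled else math.log(1.0 - p)
        return ll

    return max(candidates, key=log_likelihood)

# Hypothetical history: recalled after 1 and 3 days, forgot after 7 and 10.
history = [(1, True), (3, True), (7, False), (10, False)]
print(fit_half_life(history))  # best grid fit: 4.0 days
```

With more data per student, the grid search would be replaced by proper optimization and richer models, but the principle — fitting memory parameters to each learner's history — is the same.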

Machine learning research
Modern spaced repetition algorithms achieve more efficient review schedules using machine learning methods. They often combine an underlying memory model, such as an exponential or power-law forgetting curve, with techniques such as half-life regression to optimize the review schedule, or frame scheduling as a deep reinforcement learning problem. This has enabled the development of effective, data-driven algorithms that surpass early fixed-parameter models.
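The half-life regression idea (Settles & Meeder, 2016) can be sketched briefly: the predicted half-life is 2 raised to a linear function of features describing the student and the item, and recall probability then follows the exponential model. The feature names and weights below are illustrative stand-ins, not trained values.

```python
def predict_half_life(features, weights):
    """Half-life regression: estimated half-life is 2^(w · x),
    a linear model of student/item features in log2 space."""
    return 2.0 ** sum(w * x for w, x in zip(weights, features))

def predict_recall(elapsed_days, features, weights):
    """Recall probability under the exponential forgetting model."""
    return 2.0 ** (-elapsed_days / predict_half_life(features, weights))

# Hypothetical features: [bias, # past correct answers, # past mistakes].
weights = [1.0, 0.5, -0.3]  # illustrative, not trained
x = [1.0, 4.0, 1.0]         # 4 correct, 1 mistake so far
h = predict_half_life(x, weights)  # 2^(1 + 2 - 0.3) = 2^2.7 ≈ 6.5 days
```

In the real system the weights are learned from millions of review logs; the appeal of the model is that one regression maps raw interaction features directly to a personalized review interval.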

Figure: Through machine learning methods, the MEMORIZE algorithm optimizes spaced repetition schedules (Tabibian et al., n.d.).

Spaced repetition has consistently been shown to be an effective learning method for more than 100 years. Improved data collection and accessibility, together with progress in machine learning, keep increasing its effectiveness across many fields of study. As technological developments in digital learning, data analytics, and AI continue to progress, the opportunities to leverage spaced repetition become ever more available.

If you wish to learn more about how you can implement personalized learning in your organization, please send a note to miruna@sanalabs.com.

References

Anki. (n.d.). https://apps.ankiweb.net/.

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354–380. Retrieved from https://www.gwern.net/docs/www/uweb.cas.usf.edu/f1052ecdd92f0ecc3f57bdd890a4a6558483ec45.pdf.

Chun, B. A., & Heo, H. J. (2018). The effect of flipped learning on academic performance as an innovative method for overcoming Ebbinghaus’ forgetting curve. Proceedings of the 6th International Conference on Information and Education Technology — ICIET ’18. Retrieved from https://www.researchgate.net/publication/324816198_The_effect_of_flipped_learning_on_academic_performance_as_an_innovative_method_for_overcoming_ebbinghaus'_forgetting_curve.

Ebbinghaus, H. (1885). Memory: A Contribution to Experimental Psychology. Teachers College, Columbia University, New York, NY, USA.

Kang, S. H. K. (2016, January 13). Spaced Repetition Promotes Efficient and Effective Learning. Policy Insights from the Behavioral and Brain Sciences, 3(1), 12–19. https://doi.org/10.1177/2372732215624708.

Mnemosyne. (n.d.). https://mnemosyne-proj.org/features.

Reddy, S., Levine, S., & Dragan, A. D. (2017). Accelerating Human Learning with Deep Reinforcement Learning. Retrieved from https://people.eecs.berkeley.edu/~reddy/files/DRL_Tutor_NIPS17_MT_Workshop.pdf.

Settles, B., & Meeder, B. (2016). A Trainable Spaced Repetition Model for Language Learning. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Retrieved from https://www.aclweb.org/anthology/P16-1174.pdf.

Spaced Repetition for Efficient Learning. (2019, May 17). Gwern. Retrieved from https://www.gwern.net/Spaced-repetition.

Spaced Repetition. (n.d.) In Wikipedia. https://en.wikipedia.org/wiki/Spaced_repetition

Tabibian, B., Upadhyay, U., De, A., Zarezade, A., Schölkopf, B., & Gomez-Rodriguez, M. (2019, March 5). Enhancing human learning via spaced repetition optimization. Proceedings of the National Academy of Sciences, 116(10), 3988–3993. https://doi.org/10.1073/pnas.1815156116.

Tabibian, B., Upadhyay, U., De, A., Zarezade, A., Schölkopf, B., & Gomez-Rodriguez, M. (n.d.). Memorize: An Optimal Algorithm for Spaced Repetition. Retrieved from http://learning.mpi-sws.org/memorize/.
