UW MSR Summer Institute on Crowdsourcing Personalized Online Education

Amy J. Ko
Published in Bits and Behavior
Jul 20, 2012

For the past three days I’ve been at the 2012 UW MSR Summer Institute, which is an annual retreat on an emerging research topic. This year’s topic was “Crowdsourcing Personalized Online Education”. What this really meant in practice was two things: what is the future of education, and how can we leverage the connectedness and observability of learning online? The workshop was mainly talks, but an impressive roster of speakers and attendees kept everyone engaged.

Here are some of the important things I observed across all of these discussions and talks:

  • The first thing that was apparent is just how different the motives and values are in the different communities that attended. The majority of the attendees came from a computing perspective, with primary interests in creating new, more powerful, and more effective learning technologies. There was a smaller number of learning scientists, interested in explaining learning and devising better measurements of it, much more rigorously than any of the computing folks had done. Two representatives from the Gates Foundation also came briefly, and it was clear that their primary interests were much less in specific technologies and much more in creating educational infrastructure and new, sustainable markets for educational technologies. There were also representatives from Khan Academy and Coursera, who were broadly interested in providing access to content and in mechanisms that enable experts to share content. My view on what’s really new behind all of this press about online learning is that computing researchers are newly interested in learning and education: almost everything else, except for the scale of access, has been done in online learning before.
  • Jennifer Widom, Andrew Ng, and Scott Klemmer (all at Stanford) talked about their experiences creating MOOCs for their courses. The key takeaway was that creating a course is very time consuming, with each of them spending countless hours recording high-quality lectures, negotiating rights for copyrighted material, and working out bugs in the platform. All of them implied that running the course the first time was more than a full-time job. On the other hand, they were confident that later offerings would take much less time and that most aspects of the class could scale to be arbitrarily large (even design critiques, in Scott Klemmer’s case, through calibrated peer assessment). The one part that doesn’t scale is student-specific concerns (for example, students getting injured and needing an extension on an assignment). Scott also suggested that every order of magnitude increase in the number of students demands an order of magnitude increase in the perfection of the materials (because there are so many more eyes on them), but again, this is a decaying cost, assuming the materials don’t change frequently.
  • In many of the conversations I had about how MOOCs might change education, many faculty believed that the sheer availability and accessibility of instructional content would shift the responsibilities of instructors. Today, most individual instructors are responsible for making their own materials, making them accessible, and then using them to teach. In a world where great materials are available for free, the first two responsibilities disappear. The new job of a higher ed instructor may therefore be much less about designing materials and providing access to them, and much more about correcting misconceptions, motivating students, designing good measurements, and building learning communities. One could argue that this is an overall improvement (and also that it mirrors the way textbooks already work: written by a small number of experts and used as-is by instructors).
  • Interestingly, most of the MOOC teachers reported that the social experience of students online was critical, including forum conversations, ad hoc study groups in different cities around the world, and peer assessments. This might quell many of the concerns that higher ed teachers have about the loss of interaction online: it may just be that the interaction shifts from instructor/student interaction to student/student and student/intelligent tutor interaction. Some of the preliminary data suggests that students actually greatly prefer this, since they don’t get much instructor interaction in traditional courses anyway, but they get much more student/student interaction than in a traditional co-located course. This might therefore be an improvement over traditional lecture-based classes, though not over classes in which teachers interact closely with students (such as small studio courses).
  • No one knows what will happen to the education market, including the people running Khan Academy and Coursera. However, there were some predictions. First, these platforms are going to make it so easy to share and access content, just as the web has for everything else, that finding and choosing content is going to become a critical challenge for students. Therefore, one new role that instructors might play is selecting and curating content in a way that is coherent and personalized for the populations they teach.
  • Most of the interests related to crowdsourcing are in (1) enabling classes to be taught at scale (by freeing instructors and TAs from having to grade and assess all of the work), (2) improving the effectiveness, efficiency, and/or engagement of learning activities, or (3) creating new opportunities for informal learning, such as through oDesk or Duolingo. Researchers are thinking about how to use data to optimize the sequence of instruction, give just the right hints to correct misconceptions, and select tasks that are challenging but not too challenging (a minimal sketch of this kind of task selection appears after this list). In my view, this is leading to a renewed interest in intelligent tutoring systems.
  • As usual, most of this new research suffers from a lack of grounding in, and leveraging of, the prior literature in the learning sciences and intelligent tutoring systems. There is tons of research on all of the challenges that computing researchers are tackling, but I don’t see them really using any of it. This happens over and over in computing research, since the interest is often in creating new things rather than in understanding the things themselves. I was impressed, however, by how much Andrew Ng had leveraged findings from the learning sciences to support certain design decisions in Coursera.
  • There was a big undercurrent of data science at the workshop. Everyone was excited about big data sets and how they might be leveraged to improve learning technologies. Most of the methods reported were fairly primitive (A/B testing, retention rates; a sketch of the kind of comparison people described appears after this list), but I’m hoping this new energy behind learning will lead to much better methods and tools for doing educational data mining.
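To make the task-selection idea above concrete, here is a minimal sketch, purely my own illustration and not any system discussed at the workshop, of picking the task whose predicted chance of success is closest to a target. The task names, skill and difficulty numbers, and the simple one-parameter logistic (Rasch-style) model are all assumptions for illustration.

```python
# Minimal sketch: pick a task that is challenging but not too challenging,
# assuming a Rasch-style model of the learner's chance of success on each task.
import math

def p_success(skill: float, difficulty: float) -> float:
    """Probability of success under a one-parameter logistic (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(skill - difficulty)))

def pick_next_task(skill: float, task_difficulties: dict, target: float = 0.7) -> str:
    """Return the task whose predicted success rate is closest to the target."""
    return min(task_difficulties,
               key=lambda task: abs(p_success(skill, task_difficulties[task]) - target))

# Hypothetical learner and task pool, for illustration only.
tasks = {"warmup": -1.0, "loops": 0.8, "recursion": 1.5, "dynamic_programming": 3.0}
print(pick_next_task(skill=1.0, task_difficulties=tasks))  # -> "loops"
```

A real tutoring system would of course estimate skill and difficulty from data rather than hard-coding them; the point here is only the shape of the decision.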
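And here is an equally minimal sketch of the kind of “fairly primitive” analysis mentioned in the last bullet: comparing completion rates between two versions of a lesson with a two-proportion z-test. The counts are made up for illustration.

```python
# Minimal sketch: A/B test on lesson completion rates using a two-proportion z-test.
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """z statistic for the difference between two completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical data: lesson variant A vs. variant B.
z = two_proportion_z(success_a=420, n_a=1000, success_b=465, n_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 would suggest a difference at the 5% level
```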

Phew! Sorry for the lack of coherence here. We covered a lot of ground in 2.5 days and this is just a sliver of it.

