The bleeding edge of computing education research: ICER 2017 trip report

Amy J. Ko
Bits and Behavior
Aug 27, 2017
ICER 2017 attendees at UW Tacoma, around their distinctive round tables used for discussing presentations.

Learning about computing, and learning to code in particular, is deceptively challenging. While modern marketing about these skills likes to talk about them as if they're a quick path to prosperity, the reality is that many people can't even learn the basics. Students continue to drop out of classes, adults fail to complete coding bootcamps, and only the most persistent land lucrative jobs in top tech companies.

This is where computing education research comes in. It's the science of learning computing, including all of the social science necessary to understand how people's motivation and identity influence learning, all of the learning science necessary to understand how people acquire computing skills, and all of the computer science necessary to support this learning.

Last weekend was the annual gathering of scientists who study these topics: the ACM International Computing Education Research conference (ICER). It's the premier academic conference on the science of learning and teaching computing. This means that some of the most rigorous work is published at ICER, including some of the biggest advances in the field.

The academic community at the conference has grown considerably since I first started attending in 2011. Back then, it was no more than 70 researchers and two days of content, with only a few doctoral students. Just six years later, it's more than 150 researchers, with dozens of doctoral students.

One of my favorite aspects of the community is how lively, inclusive, and encouraging the group is. Every single new attendee giving a talk gets a welcome and applause; roundtable discussions after each talk engage newcomers in dialogue about the work.

Since the conference was relatively local (a 30–60 minute drive south of Seattle), I brought a big crew, most of whom were first-time attendees. My doctoral students Greg Nelson (CSE), Dastyni Loksa (iSchool), Benji Xie (iSchool), and Kyle Thayer (CSE) attended, along with two of my summer undergraduate research assistants, Harrison Kwik (CSE) and Alex Tan (iSchool). There were also several other UW doctoral students in attendance, including Ada Kim (iSchool), Nanchen Chen (HCDE), and students from the College of Education.

There were 29 papers published this year, three of which were from my lab: I published one on computing mentorship of adolescents, Greg and Benji published on an exciting new programming language tutor, and Kyle published on coding bootcamps.

Research at the conference still has a strong focus on computer science education in colleges and universities. Several papers investigated aspects of student experience in higher education classrooms. For example, one study found that student frustration correlates with performance in class and creates feedback loops that harm future performance (Lishinski et al. 2017). Another found that when college students are asked to describe programming, they convey very narrow, classroom-centric conceptions, with little sign of a developing identity as programmers or software developers (Moskal et al. 2017). These studies help us understand more deeply the experiences of students learning computing in college, and how those experiences shape their learning.

Other studies related to classrooms focused on how to use student performance data to help identify students who are struggling. For example, one study found that a small sample of wrong answers covers most of the wrong answers that students give (Stephens-Martinez et al. 2017), suggesting that even small data sets can help us understand most of the struggles students have in understanding something. Another study found that static analysis warnings can be good predictors of when students are struggling (Edwards et al. 2017), suggesting that advances in software engineering can serve both as learning tools in class and as predictors of struggle. Another paper found that starting earlier and testing earlier predict students' program correctness (Kazerouni et al. 2017). Still another found that to predict student performance, gathering multiple unique data sources is key, since metrics from the same data source tend to correlate highly with one another, providing little additional predictive power (Leinonen et al. 2017). Each of these papers builds a robust body of evidence about factors correlated with learning outcomes, both to help us understand student learning and to help teachers intervene more effectively.

Many papers focused on the learning of programming and programming languages, independent of context. For example, my students Greg and Benji presented a paper on a new theory of programming language knowledge, arguing that programming language knowledge is about understanding semantics, and that causal inference on a program visualization that displays those semantics is a rapid way to learn them (Nelson et al. 2017). Greg demoed his new tool, PLTutor, which reifies this theory and can teach a programming language in just a few hours. Others studied program comprehension, finding that novices have a hard time deciding which schema to apply when comprehending a program's purpose and evaluating when to switch (Fisler et al. 2017), and that when novices explain examples to themselves and review them before solving similar problems, they do better than when reviewing expert explanations (Margulieux and Catrambone 2017). Another study found that when students sketch to trace program execution completely and systematically, they trace it more correctly (Cunningham et al. 2017). All of these studies aim to inform how to better teach programming and programming languages.

Tools to support learning were a common topic as well. One study found that improved compiler errors don't have much effect on learning outcomes, even when learners read them closely (Prather et al. 2017). Another study found that theorem provers like Coq can help students better learn how to write formal proofs (Knobelsdorf et al. 2017). And, in what is always a popular topic, another paper presented a game designed to teach explicit debugging strategies (Miljanovic and Bradbury 2017). Another fascinating study found that students have very different expectations of computer-based tutors than they do of human tutors, viewing computer-based tutors as more accessible but less trustworthy, which shapes their help-seeking behavior (Price et al. 2017). Another study discovered that how we visually represent diagrams of architectural state mediates what conceptions students form (Herman and Choi 2017). Yet another found no clear pattern in the effect of text, audio, or combined text and audio on learning (Morrison 2017). These papers reinforce the importance of tools and representations in shaping learning outcomes.

With the recent global efforts to incorporate computing into K-12 education, there were many studies about sociocultural factors behind these efforts. A team at Google found in their nationwide survey of parents and teens that adolescents in the U.S. have widely varying levels of interest, perceived culture fit, and perceived ability across gender, race, and their intersection (Wang et al. 2017). I presented a paper that found that interest in learning computing is strongly related to having an informal computing mentor, and that these mentors can be parents, teachers, friends, and even siblings (Ko and Davis 2017). Two other papers studied the social experiences that students have while learning at school, one finding that there are three types of social interaction in primary school collaborative CS learning: general socialization, excitement and accomplishment, and creative problem solving (Israel et al. 2017), and the other finding that students struggle to negotiate effectively in pair programming in high school (Deitrick et al. 2017). These studies reinforce the importance of understanding the social context of learning computer science.

Some of the research focused on teacher experiences. One paper found that teachers and students use eBooks very differently to learn computer science, suggesting the need to personalize materials (Parker et al. 2017). One national study found that teachers in Italy have little agreement on what computational thinking means (Corradini et al. 2017). A similar kind of study found that computing education researchers believe active learning is good, but frequently misuse the phrase "active learning," implying that all forms of active learning are equally effective (Sanders et al. 2017). Perhaps the only study to investigate teacher professional development directly found that it is an emotional rollercoaster, driven by the challenges of reconciling teachers' new knowledge of computer science with the constraints of classrooms and time (Reding and Dorn 2017).

The conference banquet, at the Tacoma Art Museum, emceed by Donald Chinn

A few studies focused on research "infrastructure" building. The winner of one of the best paper awards built a map of learning trajectories for primary school computer science learning, focused on sequencing, repetition, and conditionals (Rich et al. 2017). This is the kind of trajectory that will shape and inform curricular studies in primary schools. Two other papers contributed validated instruments, one for measuring computational thinking practices in high school (Snow et al. 2017) and one for measuring self-efficacy in introductory algorithms courses (Danielsiek et al. 2017).

While the vast majority of papers at the conference still focus on learning that happens in educational institutions, two papers investigated new contexts of learning. My student Kyle Thayer reported on his investigation of coding bootcamps, finding that they replicate many of the diversity barriers of higher education classrooms, but provide a high-risk, high-cost second chance for diverse learners (Thayer and Ko 2017). Jeremy Warner reported on a study of hackathons, finding them to be compelling sites of informal peer learning, but ones that impose significant diversity barriers to participation (Warner and Guo 2017).

Peter Norvig, Director of Research at Google, facilitating research question brainstorming at the workshop.

Beyond the conference, Greg, Benji, and I also attended a workshop organized by Ben Shapiro (Colorado) on the topic of learning machine learning. While we know an increasing amount about learning programming, we know almost nothing about how to teach and learn machine learning. This is a big problem: lots of companies are staking their future on machine intelligence, but very few engineers know much about it. I blogged more extensively on the problems this may cause in the coming decades.

Considering this year's proceedings more critically, many of my students who attended for the first time wondered where the grand theories of computing education were. While many papers used theory superficially, it was hard to see how all of this research is building toward a greater understanding of how to best learn and teach computing. I'm obviously biased, but I think the paper that made the most progress in this regard was Greg Nelson's work on PLTutor from our lab: not only did he show significant advances in learning outcomes, but he presented a unifying theory of why, one that others can test, replicate, and falsify.

Personally, I think theory building is the greatest barrier to the field having impact over the coming decade. We need to be thinking beyond one paper at a time, building a stronger scientific dialogue over the course of years. Once we do this, we'll have many more practical things to say about how teachers should teach, how learning technologies should work, and how learners need to learn. After all, as Kurt Lewin, a pioneer of social and organizational psychology, once said, "There is nothing so practical as a good theory."
