An illustration of a Terminator robot with a thought bubble that says “IF OBJ.IS_HUMAN DESTROY OBJ”
Is the Terminator coming for your curriculum?

The half life of computing education

Amy J. Ko
Bits and Behavior

--

In case it hasn’t been obvious, I’m really interested in computing education right now. There are so many things I like about it: a growing and inspired researcher and practitioner community, an abundance of hard, unanswered questions, and—as someone who identifies fundamentally as a human-computer interaction researcher—phenomena rich with people interacting with computers. I’m all in!

However, as some people know, I’m also quite fond of being contrarian, taking something widely accepted or cherished and questioning it as deeply as I can. This includes my own ideas and passions. I do this not necessarily because I dislike something, but more often because I’m deeply interested in it: I find joy in questioning the premise of things about which I’m curious, to find deeper truths about them.

In that spirit, I’ve been wondering lately: does computing education, as a phenomenon, a practice, and a research area, have a half life, and if so, what is it? Many computer scientists ask me variants of this question. Here are some of the arguments I’ve encountered from my colleagues:

  • If computer scientists work hard enough, will computing education even be necessary anymore, at least at scale? After all, the vision of machine learning, artificial intelligence, and related techniques is that many—most?—of the algorithmic procedures we write in the world will not be hand-crafted by programmers, but learned from data, and programming will become obsolete. If this future occurs, will we need people to learn computing, at least the parts of it that involve learning how to construct software? Or will we have solutions to most routine problems, or at least have hordes of data curators, and no longer need developers to solve them? The only need for education left will be for those who want to be a maintainer of the singularity (if it even has one).
  • When teachers are successful at teaching computing, how long does that knowledge persist, both in students’ minds and in its practical relevance? After all, knowledge decays, transfer is rare, and so much about computing—programming languages, tools, infrastructure—changes quickly. What is the point of teaching all of these ideas in computing when they become outdated so quickly, and must inevitably be replaced by something more modern?
  • Perhaps the foundations of computing do not change so frequently, and most of the important things to learn become static, allowing computing education researchers to “catch up” to the half life of computing. In this future, computing education researchers will answer all of the questions of practical relevance in teaching, and the field will become a stagnant discipline-based education research area that retreads the same ground until people realize it’s contributing little of relevance and stop investing in it.

Obviously, even if any of these futures come true, it won’t be for quite some time, since we have so few robust answers to even the most basic of computing education questions. But as I think about the next 30 years of my career, I do wonder: if these futures do come to pass, where should those 30 years take us? And more importantly, how should computing education be shaping computing more broadly, to steer its future in a way that serves humanity?

There are a few rebuttals I can see to these futures. To the singularity argument, I generally react with some skepticism that AI will ever be good enough to write programs itself. It’s certainly better than it was forty years ago, and getting better faster than ever, but all of the techniques I’ve seen still rely on human input to specify desired program behavior, either through examples or formal specifications. All the AI is doing is filling in templates carefully authored by people. This points to a future in which programming languages just keep getting higher and higher level, making computing education more challenging, because creating software at ever higher levels of abstraction requires ever higher levels of abstract reasoning. This will only increase the demand for computing education research, to understand how to teach these more abstract skills. But even if the singularity comes, computing education will still be necessary: we’ll all have to know a hell of a lot about how the singularity works to combat Skynet effectively, lest our species perish :) It will become the most important literacy.

To the knowledge relevance problem, one rebuttal is the “learning-to-learn” argument. This is the fallback in many discipline-based education research fields whose relevance is questioned, but one that I think is authentically true in computing. After all, no matter how much changes about computing, students will still need discipline-based self-regulated learning skills for cultivating motivation, pursuing resources, and interpreting information related to computing. Without these, how will they find, read, and understand documentation about new programming languages, APIs, and infrastructure, all of which require some knowledge of the foundations of computing to understand? How will they learn and teach each other about new inventions in computing? The worst case scenario in this future is that the computing education curriculum inverts itself, focusing less on practical skills, and more on foundational concepts and self-regulated learning skills. This might even describe current computing curricula and teaching, in that students have to teach themselves many things we do not teach directly.

To the research stagnation problem, one rebuttal is that teaching computing isn’t just about teaching the foundations of computing, but also about interrogating the interactions between these foundations and an ever shifting social world. I can’t imagine that the role of computing in society, and our shaping of this role as people, will ever stagnate. We will find new ways to appropriate computing to our ends, which will create new opportunities and problems in society, which will necessitate new research and practice on how to engage students in understanding these sociocultural and sociopolitical dynamics. Consider the past twenty years as an example: twenty years ago, most of the computing education research community was concerned with teaching loops so youth could get jobs; ten years ago, it started being concerned with broadening participation, so that everyone could get jobs; and now it’s starting to be concerned with ethics, morality, justice, and responsibility, and with the very notion that computing education is for jobs. These topics didn’t materialize from pure curiosity; they emerged from the shifting role of computing in daily life.

As with most of these thesis/anti-thesis/synthesis exercises, I end up in a place that draws upon both extremes. Computing will make some ideas obsolete, which will deprecate much of our understanding of CS learning and teaching. There are some things not worth teaching because their relevance will decay too quickly. And computing education research, like all fields of research, is always at risk of becoming too incremental and too insular to make meaningful progress. Our best defense against all of these is to watch computing and the world closely, and focus our attention on the interactions between the two that people most need to understand and shape.

--
