The invisibility of prior knowledge

Amy J. Ko
Published in Bits and Behavior
4 min read · Sep 28, 2016

When you watch an Olympic sprinter run 50 yards in 5 seconds, what’s your first thought?

  1. That must have taken an incredible amount of practice, or
  2. Wow, that is some incredible DNA.

Now, we know both nature and nurture matter. But in watching sprinters, we see nurture matter because we can see sprinters practice. Olympics broadcasts show us hours of practice. We see their coach. We know that Nike has sponsored their thousands and thousands of hours and hundreds of pairs of shoes. We know that as much as someone might be born with a genetic head start, the only way to really get to the top is to practice more and better than anyone else in the world.

And yet, for other kinds of human ability, few people ever consider the role of practice, instead attributing ability to genetics. This is especially true in software. People assume Bill Gates must have been a genius. The news frames Mark Zuckerberg as a boy prodigy. Hacker culture of the ’90s, and to a large extent still today, divides people into “real” coders and posers, treating computing as if it were something natural, innate, and inborn, gifted to a privileged few.

The reality, of course, is that the majority of variation in ability in computing and every other field is due to practice, not genetics. As K. Anders Ericsson found over years of studying expertise, most of the variation in expert performance is explained by how well and how much people practice a skill. Coding clearly isn’t something people are born knowing how to do, nor is it likely something people are born with a predisposition for. It is something people learn, and it is our experiences, our other skills, and our environment that develop and sustain the motivation to learn, and that likely shape whatever predispositions to learn we have.

Why do people gravitate so easily to theories of ability grounded in genetics rather than practice? I think it’s because practice, and in particular the prior knowledge that practice produces, is invisible. When you meet someone, you can’t see what they know, how well they know it, how many years they’ve been practicing it, or how well they’ve been practicing it. In fact, even when scientists like me try really, really hard to measure what someone knows, we struggle immensely to reliably and accurately capture ability. It’s really only in a narrow set of domains, like sports, where we’ve created elaborate systems of structured measurement to quantify ability.

This invisibility of prior knowledge, and the attribution of ability to innate qualities rather than practiced skill, has many consequences throughout software engineering and computing education. When a company tries to hire someone, it has very weak measurements of what an engineer knows, and has to rely on pretty pathetic coding tests that likely correlate little with actual skill. Worse yet, when a CS1 teacher gets a classroom of new students, they often attribute success in the class not to the quality of the practice they have provided, or to the vast differences in how much and how well students practiced before class, but instead divide students into those who “get it” and those who don’t. Because hiring managers and teachers can’t see into each person’s mind, they can’t comprehend the vast complexity of prior knowledge that shapes each individual’s behavior.

Because of these challenges, measuring knowledge of computing really is one of the most pressing and important endeavors of computing education research. We need robust, precise instruments that tell us what someone knows and can do. We need the decathlon of coding, helping us observe ability across a range of skills, each event finely tuned to reveal the practice that lurks beneath someone’s cognition. Only then will we have any clue how to support and develop skills in computing and know that we’re doing it successfully.

So far, the closest thing to this in computing education research is the series of language-independent assessments of CS1 program tracing skills that have come out of Mark Guzdial’s lab. These are great progress, but we need so much more. My former Ph.D. student Paul Li did a study of software engineering expertise, finding dozens of attributes of engineering skill, none of which we know how to adequately measure. Just as lenses revolutionized astronomy and biology, we need instruments that allow us to peer into people’s computational thinking to revolutionize the learning of computing.

Ready to help? Come do a Ph.D. with me or with the dozens of other faculty in the world trying to see the invisible things in people’s heads. Let’s figure out how to transform humanity’s understanding and use of computing by seeing what people know about it.


Professor, University of Washington iSchool (she/her). Code, learning, design, justice. Trans, queer, parent, and lover of learning.