Teaching and Understanding Predictions in K-12 Education

When I tell teachers about Philip Tetlock’s research showing that the average expert’s predictions are about as accurate as a dart-throwing chimp’s, I usually get one of three reactions: surprise, curiosity, or indifference. Granted, I introduce the concept in the context of teaching Google Apps (usually Forms and Spreadsheets) and/or critical thinking, so it likely seems like a non sequitur.

In the same class, when I ask teachers to name the computer-based, algorithmic predictions in their daily lives, hands don’t fly up. It usually takes some examples and modeling to get the gears turning. Completely understandable. Many of us don’t realize how ubiquitous predictions and algorithms are, or how thoroughly they’ve invaded our daily lives. They’re all around us, and unfortunately, they’re addictive.

Before studying data science, I really hadn’t considered predictions either. Like most people, I innately made them, passively accepted them, and relied on others’ predictions to shape my worldview. My predictions led me to dry sandwiches and longer commutes, and they affected my family and students too.

Philip Tetlock’s book Superforecasting chronicles how average people beat the predictions of experts (including those with access to classified information) by 70 percent over four years in a publicly available forecasting tournament called the Good Judgment Project (sign up; it’s still going). He details the art and science of how these everyday Joes and Janes do it.

Having spent 20 years in education, I have heard and made my fair share of predictions: writing curriculum, training teachers, drafting three-year technology plans, and so on. Tetlock’s book put words to a dissonance I had long felt but could never quite express.

I have sat through many keynotes listening to “experts” or “futurists” predict what students (and we teachers) need to know. I have seen many grand K-12 education initiatives (based on predictions) come and go, quickly forgotten. And I have seen the allure of “shiny” technology capture the hearts and minds of teachers, only to be literally shelved. Superforecasting connected many of these dots for me. A good friend to whom I recommended the book reported the same thing after reading it.

Admittedly, I was the kid who took stuff apart to see how it worked. Getting it back together always seemed the greater challenge (hi, mom!). I like to see how the sausage is made; yes, I’ve actually made it (hi, Tom and Dave!). So it comes as little surprise that I have been enamored not only with teaching, but with how predictions in K-12 education are made, who makes them, and how they impact students.

A lot of K-12 education is a prediction. We predict what students will need to know and be able to do, sometimes more than a decade out. We predict how students will perform. We predict their futures based on that performance.

We predict which curricula, technologies, tools, and resources will have an impact on students. We predict how certain teacher evaluation methods will affect students. Even school budgets are predictions.

When it comes to student learning, students do get some exposure to prediction. The Depth of Knowledge Wheel and some PARCC standards (examples are here and here) are a sampling.

Side note: if you aren’t a fan of PARCC, your critique is itself a prediction that something else will be better.

If it seems like I am not mincing words, you are right. A prediction is a claim about the outcome of something before it happens.

The problem isn’t making predictions; it’s the accuracy of our predictions. We humans aren’t wired to make great predictions, nor do we really hold experts accountable for theirs.

Specificity, in terms of both time and nuanced probability, isn’t wired into our brains. Our noggins like to take shortcuts known as heuristics. These shortcuts come from one of two modes of thinking: System 1, the quick, survival-oriented part of our brain, and System 2, the slow, deep-thinking part. Because survival is metaphorically “eat or be eaten,” System 1 assigns a quick probability: a 0, 50, or 100 percent chance of something happening. Our brains like to conserve resources, so these vague probabilities get assigned to many predictions. An example of a common heuristic in K-12 education: “Will students be engaged with technology?” It’s a shortcut for asking, “Will students be learning with technology?”

The reality, though, is that there are 98 other whole-number values besides 0, 50, and 100. In other words, there is nuance beyond “nope” (0 percent), “maybe” (50 percent), and “yup” (100 percent). To see this nuanced probability, roll a die or pick a card from a deck. It’s not the two- or three-setting mental dial Tetlock describes (Daniel Kahneman is the researcher behind System 1 and System 2 thinking).
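To make the die-and-deck point concrete, here’s a minimal Python sketch (my own illustration, not from Tetlock’s book) of everyday probabilities that don’t land on 0, 50, or 100:

```python
from fractions import Fraction

# Everyday probabilities that the 0/50/100 mental dial can't express
p_six = Fraction(1, 6)        # rolling a six on a fair die
p_ace = Fraction(4, 52)       # drawing an ace from a standard 52-card deck
p_two_sixes = p_six * p_six   # rolling two sixes in a row

for label, p in [("a six", p_six), ("an ace", p_ace), ("two sixes", p_two_sixes)]:
    print(f"P({label}) = {p} = {float(p) * 100:.1f}%")
```

Run it and you get roughly 16.7, 7.7, and 2.8 percent: three forecasts, none of them “nope,” “maybe,” or “yup.”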

Time, depending on how you look at it, is on your side. Predictions are about knowing an outcome in the future. But the generic “future” is grossly vague: it can be anywhere from a nanosecond from now to infinity. It is hard to assess the accuracy of a prediction without a set, realistic time horizon (e.g., 3 months, 6 months, 1 year).

As Tetlock writes, pundits have a tendency to be vague. They usually won’t provide a nuanced probability, and they rely on a vague or generous time horizon. That gives their predictions wiggle room to be “correct.” A simple example is saying, “There’s a 50 percent chance of rain in New Jersey during 2017.” And for the most part, we accept it.
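For contrast, here’s what a pinned-down forecast can look like as a small Python data structure (the fields and the example event are my own, purely illustrative):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Forecast:
    question: str       # an unambiguous, checkable event
    probability: float  # a nuanced probability, not just 0, 0.5, or 1
    deadline: date      # the date by which the forecast resolves

# The pundit version leaves wiggle room; this version can be scored.
rain = Forecast(
    question="Measurable rain recorded in Trenton, NJ on the given date",
    probability=0.65,
    deadline=date(2017, 6, 1),
)
```

Once the event, the probability, and the deadline are all explicit, there’s nowhere for the forecaster to hide.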

Computers and algorithms, on the other hand, are pretty precise. And in data science we are taught to continually improve our algorithms. We don’t consider being “wrong” a failure; it’s learning. Interestingly, Tetlock confronts this very issue. He found that forecasters who update their predictions are more accurate than those who don’t.
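Forecast accuracy can be scored the same way an algorithm’s can. The Brier score, the scoring rule used in Tetlock’s forecasting tournaments, is just the mean squared difference between your stated probabilities and what actually happened. A quick sketch with made-up numbers:

```python
def brier_score(forecasts, outcomes):
    # Mean squared error between probabilities and 0/1 outcomes.
    # 0.0 is perfect; always saying "50 percent" scores 0.25.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]            # what actually happened (1 = it occurred)
vague    = [0.5, 0.5, 0.5, 0.5, 0.5]  # the "maybe" forecaster
updated  = [0.8, 0.3, 0.7, 0.9, 0.2]  # same events, revised as evidence came in

print(brier_score(vague, outcomes))    # 0.25
print(brier_score(updated, outcomes))  # ~0.054, markedly better
```

The updated forecasts score roughly five times better, which is the whole point: revising a prediction isn’t cheating, it’s how accuracy improves.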

In all of this, I do see some inescapable irony. There is resistance to algorithms in education, despite humans not having a stellar track record of prediction. After all, there have been major events in the last 30 years that we humans didn’t predict, even while in the midst of them.

The good news is that we can learn to be good forecasters. We can and should teach our students how to be good forecasters rather than completely outsource their thinking. I realize a current trend is to teach students computer coding. That may satisfy a predicted shortage of workers in the short term.

But I think we owe it to our students to think bigger: let them make that prediction for themselves, with a nuanced probability and a time horizon, of course.