What a wonderful list of questions on computational thinking!

My discussion group is meeting on Friday, so I’ll let you know! I’d planned to revisit this and share some thoughts after letting it sit for a week or so. But I’ll go ahead and say a few things now.

Personally, I think when people talk about computational thinking, they mostly have the right set of ideas in mind: decomposition, abstraction, and modeling. There’s another bit, though, sort of underlying all of these, that’s harder to name. It’s the sense that, instead of trying to *first* understand and *then* formalize, we can build understanding through the creation of a formally validated symbolic representation. (There’s a beautiful section of the SICP text in which applying more and more abstraction to a procedure for computing square roots leads to exploring its connections with important ideas like fixed points, average damping, and derivatives. In the lectures recorded for HP, Abelson commented on how this lets us comprehend the meaning of Heron of Alexandria’s process in a way that even Heron could not access.)
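
For concreteness, here is a rough Python paraphrase of that development (the book itself works in Scheme, and the function names here are mine, not SICP’s): Heron’s rule of averaging a guess with x/guess turns out to be one instance of a general fixed-point search, with average damping split out as a reusable idea.

```python
# A rough Python paraphrase of SICP's development (the book uses Scheme).
# Heron's step "average the guess with x/guess" turns out to be a damped
# fixed-point search, which is where the deeper connections show up.

TOLERANCE = 1e-10

def fixed_point(f, guess):
    """Iterate f until successive guesses agree to within TOLERANCE."""
    nxt = f(guess)
    while abs(nxt - guess) > TOLERANCE:
        guess, nxt = nxt, f(nxt)
    return nxt

def average_damp(f):
    """Return a function with the same fixed points as f, but tamer steps."""
    return lambda x: (x + f(x)) / 2

def sqrt(x):
    # sqrt(x) is a fixed point of y -> x/y; undamped, that map just
    # oscillates between two values, and average damping is exactly
    # Heron's averaging.
    return fixed_point(average_damp(lambda y: x / y), 1.0)

print(sqrt(2.0))  # ~1.4142135623...
```

SICP then derives Newton’s method from the same fixed-point machinery once derivatives enter the picture, which is the chain of connections the passage is gesturing at.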

The validation is necessary, by the way, as a crutch, because we’ve left the realm in which intuition works well as a guide. Having some intuition is a good thing, but when trying to state things formally, we always make mistakes, and they are usually not even subtle ones. They are glaringly obvious things that make our work nonsense even when we know what we meant to build. All computational environments do some amount of validation, even if it’s just checking syntax, or making the bad behaviors of our broken models obvious and observable. Some do more than that, such as checking consistent use of types or abstraction boundaries.
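
As a toy illustration of what “more than syntax” buys you (my own invented example, not drawn from any particular environment): give the quantities in a model distinct types, and a checker such as mypy flags exactly the kind of glaring, unsubtle swap described above, before anything runs.

```python
# A toy sketch (my example): giving model quantities distinct types lets a
# static checker such as mypy catch a glaringly wrong call before it runs.
from typing import NewType

Km = NewType("Km", float)
Hours = NewType("Hours", float)

def average_speed(distance: Km, time: Hours) -> float:
    return distance / time

print(average_speed(Km(100.0), Hours(2.0)))  # 50.0

# average_speed(Hours(2.0), Km(100.0)) would still *run* and produce
# plausible-looking nonsense (0.02), but a type checker rejects the call:
# the obvious-in-hindsight mistake this paragraph is talking about.
```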

This, I think, is one way in which the computer is relevant to computational thinking. Without constant validation, we still go through qualitatively the same abstraction process, but each step toward abstraction introduces a new possibility of error. The work of carefully verifying each step by hand grows to overwhelm the invention of new abstractions, limiting the feasible scope of the whole enterprise. Similarly, computer scientists are by no means the first to approach difficult tasks by structured decomposition. Yet when you are designing a bridge, the work of the engineer is necessarily dwarfed by that of the hundreds of laborers pouring concrete and welding beams, not to mention mining iron or milling parts! It gives quite a different perspective when coping with complexity is the *main* determinant of success or failure.

The problem is that people say all these things, but then they teach what they know. And for many of us who were trained as computer programmers, what we know is the *practice* of computer programming. Some of that practice is relevant to computational thinking, but much of it is an accident of history, or a consequence of which company had more marketing money in which decades. Even when a skill is well justified as technology, that is quite a different question from whether it belongs to the *core* that gets at the essence of how computation opens up new worlds.

Computer programmers are attached to their tools, but the UNIX command line, integrated development environments, and revision control systems are surely not part of that core. Maybe there’s not much debate on that point. Loops, though, are a main player in even elementary school computing activities. Should they be? They seem to me to have been invented entirely for the practice of computer programming, and are not at all important to abstraction, decomposition, or modeling.
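
To make that claim concrete with a small example of my own: the loop below is bookkeeping for the machine, while the second version states the model directly; nothing about the abstraction, decomposition, or modeling required the index-and-accumulator ritual.

```python
# A small invented example of the point: the loop is programming practice;
# the abstraction "the mean of the readings" is the modeling idea.
readings = [21.0, 23.5, 19.8, 22.1]

# Loop version: the accumulator and iteration order are machine bookkeeping.
total = 0.0
for r in readings:
    total += r
mean_by_loop = total / len(readings)

# Direct version: the same model, stated in terms of named abstractions.
mean_direct = sum(readings) / len(readings)

assert mean_by_loop == mean_direct
```

Whether that settles the curricular question is another matter; the example only shows that loops are separable from the modeling itself.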

Alas, it seems I have written a blog post in this comment. I might recycle a lot of it in my later recap after having thought this over more!