On the Value of Command-Line “Bullshittery”
Go read this first for context. Done? Good.
Stop me if you’ve heard this one. A mechanical engineer, a chemical engineer, and a computer scientist are driving down the street. The car breaks down and all three get out to see what’s wrong. The mechanical engineer says, “I know what’s wrong. It sounds like the piston rods are misaligned; if we just fix that, it’ll work again.” She takes out her tools, starts messing with the engine, and 30 minutes later the car still isn’t running. The chemical engineer says, “No, no, it sounds like we’re having a problem with the oxygen mixture; it’s too rich and we just need to tune that.” She pulls out the oxygen sensor and starts tweaking, but 30 minutes later the car still doesn’t go. The computer scientist says, “I don’t know what you two are talking about. Just turn the car off, slam the left passenger door three times, walk around the car twice, turn it back on and it will work again.” They do that, and lo and behold, the car starts.
Welcome to the CS experience.
There’s an argument that dealing with this kind of “bullshittery” (nonsensical, unfriendly artifacts and interfaces) is a tax that you have to pay to get work done. This argument says that a good learning objective (and stopping point) is the procedural thinking that allows you to pay the tax and then move on with your life. I don’t think this is right.
I think that not making students struggle with command-line bullshit is a missed learning opportunity, one with long-term consequences for both their careers and — this is going to sound sensationalist, but bear with me — for the academic field. Now here’s the twist: I don’t actually care about the command-line part. First, what I care about is the “bullshit” and the coping mechanisms around it (meta-cognitive processes). It’s just that I think the command line is a relatively safe place to struggle. Second, I don’t think it’s actually bullshit. It’s the result of a philosophy that we may not agree with but that evolved (and remains in wide use) for a reason.
If you look at most learning objective frameworks you’ll find that there’s a spectrum from the factual (type “git clone”), to the procedural (if you see this error message, make the following fixes; if you see this other error message, make these other fixes), to the meta-cognitive (reflect on your progress in fixing the error and adjust the procedures that you need to perform). The differences are subtle, but important. All these skills need to be taught, but depending on what your goals are, you can stop at “low” levels (e.g., factual or procedural). My belief is that researcher education (teaching people how to be researchers) can’t stop there. Precisely because we, the research community, work at the extreme edge of unfriendly software, meta-cognitive skills are critical.
As software becomes “friendlier,” research students are actually suffering because they never develop high-level coping skills (these often come from meta-cognition). These are the undergrad and masters students who are shifting to work with “sort-of” functional open-source software. But more critically (at least for this post), they are my PhD students who need to use cutting-edge research prototypes. These come from somewhere else and barely compile, and then only if you turn the computer off, walk around it twice, say a couple of prayers, wait another 10 seconds, and then turn it back on again (were you waiting for me to tie in that joke?).
The problem isn’t that there is some magic invocation that makes this work, the problem is that the next thing you download will need (what looks like) a completely new magic invocation. This is inevitable. It is inevitable because we work with other people who are not software developers who have been trained in the best possible procedures. They’re other professors and PhD students who have other things to worry about. This even goes for open-source developers. God bless them, they make some awesome stuff (thank you open-source people, I love you!). But they’re also not particularly incentivized to keep documentation up to date or the software running on the particular machine with the random device driver that got automatically downloaded the other day.
Which brings us to this reality. One day, and I pretty much guarantee it will happen, your student will download something from the Web and they will get stuck. They will get stuck in a way that the recipe you taught them on day 1 will not work. They will get stuck so badly that (1) they will need to come to you for help, and/or (2) they’ll abandon the thing they downloaded.
Asking for help is not necessarily a problem, but it really can be. When the student shows you what they’re stuck on, you’ll start to apply your years of experience and learned meta-skills, Google-fu, and whatever other tricks to get the thing unstuck. This is not 3 minutes of work, and it forces the few experts (read: you) to be the rate-limiting step. Clearly not an ideal, or a scalable, solution.
I’m a fan of those philosophies of management where you’re successful if you’ve somehow managed yourself out of a job. Giving other people the skills to do what you do is critical both for their development and for your “company.” This can’t be achieved by simply teaching them how to run “git”; it requires giving them the skills so that, when they download some software that uses darcs because the software’s developer loves it, they can figure it out. Yes, it’s unpleasant in the short run. The easy solution would be to teach “git” and be done with it. But I think research is about the long game (also: at some point they need to graduate and teach the next set of students).
Which brings us to the second possible reaction: the student abandons what they downloaded and goes to do something else. I think this is tragic. Part of being an academic (at least in the academic engineering disciplines) is that you get to (a) build off the work of others to create new knowledge, and (b) compare your system/solution to others (yes, to create new knowledge). If you can’t download and compile and use that crazy piece of software that uses some LISP library that is only popular among Germans born between 1980 and 1990 but that happens to be the best “piece” to your research puzzle, that’s a failure. Sure, it’s their failure for not making it easy, but that’s not something that is in your, or your PhD student’s, control. If the student abandons this best fitting piece, the work is simply not as good as it could or should be. Scientific progress is made by building on the hard work of others and that, unfortunately, requires a certain perseverance.
Even if I can’t convince you that your student will ever find themselves downloading some crazy bit of code, hopefully you recognize the other kinds of “bullshit” a researcher will encounter: weird pseudo-code in a paper with parameters that seem defined by magic that you need to implement, or pieces of code that need to be extracted from something else, or someone’s ill-documented source code that worked yesterday but doesn’t today. If you live at the edge, you need to learn how to deal with bullshit. The general coping skills that let you deal with the bullshit, as well as specific computer-science coping skills, are hard to come by. If these aren’t taught early, in relatively low-stakes, easy-to-fix environments, it will only be worse later on.
Safety in the Command Line
Even if you believe that command-line “work” is bullshit, let me offer that command-line bullshit is safe bullshit. It’s safe because it’s the first step that every developer has to take and there’s safety in numbers. People document their efforts and create bread crumb trails for others. This kind of information is (relatively) easy to find. Yes, it’s frustrating, but unlike our crazy LISP library (which may frankly be an impossible task), getting git to run or invoking commands so they don’t stop when you log off is really much more doable. Which would you rather learn meta-cognitive skills on?
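To make that “doable” example concrete, here is a minimal sketch of keeping a job alive after you log off, assuming a POSIX-ish shell. The job script is a made-up stand-in for whatever a student might have downloaded:

```shell
# A stand-in for some long-running research job (invented for illustration).
printf '#!/bin/sh\necho "crunching..."\nsleep 1\necho "done"\n' > long_job.sh
chmod +x long_job.sh

# nohup detaches the job from the terminal's hangup signal, so it survives
# logout; output goes to a log file you can inspect tomorrow.
nohup ./long_job.sh > job.log 2>&1 &
wait $!      # we wait here only so the example is self-contained
cat job.log  # contains "crunching..." and "done"

# Alternatives worth discovering on your own: tmux or screen sessions you
# can detach from and reattach to, or bash's `disown` builtin.
```

The meta-cognitive lesson isn’t the `nohup` incantation itself; it’s noticing that “my job died when I logged out” is a searchable, well-trodden problem with several solutions.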
So here’s the “rub”: how do we balance the meta-cognitive learning with our desire not to alienate new people? Here I’m in complete agreement with Philip’s argument. If you assume everyone can be, is, or wants to be a Unix hacker, you’re losing out on a lot of talent. Achieving both (training and non-alienation), I think, requires a kind of scripted chaos. The chaos you won’t have to spend too much time looking for (just think back to the last time you cursed at the computer under your breath). The script is where it’s at. It allows you to walk the student through the “bullshittery” and not just demonstrate the factual or procedural knowledge that can fix it, but also impart the meta-cognitive skills that will work when the facts and heuristics fail them.
If you watch Philip’s video, which I assume is a model for the procedural learning experience, there’s a point about 17 minutes in where the system “barfs” an error message (the Python file was written for a Windows machine and so the Mac doesn’t understand the path). There’s a fix which is demonstrated (open up the text editor, replace the path, etc.). The problem is, nowhere do we learn how he knew to do this. The stuff in the video is awesome (I’m going to start sending students to look at it) but there’s an if-this-then-that feel to parts of it. Basically, the learning objective is procedural: “just type X when you see Y.” To be clear, I think this needs to be replaced with: “you saw Y, here are some strategies for figuring out what Y means that will eventually lead us to X.”
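For what it’s worth, that particular Y (a hard-coded Windows path on a Mac) lends itself to exactly this kind of strategy teaching. A hedged sketch from the command line — the file name and paths are invented, and `sed -i` is the GNU form (on macOS you’d write `sed -i ''`):

```shell
# A made-up stand-in for a downloaded script with a hard-coded Windows path.
printf 'DATA = r"C:\\Users\\someone\\data.csv"\n' > analyze.py

# Strategy step 1: don't memorize the fix; locate where the bad value lives.
grep -n 'C:' analyze.py

# Strategy step 2: rewrite the path in place (GNU sed; macOS: sed -i '').
sed -i 's|C:\\Users\\someone|/Users/me|; s|\\|/|g' analyze.py

cat analyze.py   # now: DATA = r"/Users/me/data.csv"
```

The point of the two steps is the generalizable part: read the error, search for the offending string, then edit. The specific `sed` expression is disposable.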
So here are some things that either I’ve seen other people do or have employed myself. I’m looking for more.
- Get them to buy into the fact that bullshit (read: diagnosing and debugging weird things) is a part of life in the world of computers. There’s always the temptation to use the parental “because I said so” or the math teacher’s “you learn this math so you can learn harder math.” I think imparting the truth that this work is a part of our world, but that you can find ways of coping with it, is crucial. This one is hard. I’d love to have more easy-to-parse examples that “show” rather than “tell” this reality.
- Incentivize the search for an answer. This one I employ in my classroom, but I think it also works in research situations. I try to give people credit for making an attempt at finding the answer. For example, if they can point me at the Stack Overflow page that has something that might be turned into an answer: (some) credit. Point me at the right API documentation page: (some) credit. Learning how to “translate” a problem into a Google query, and being able to detect the answer in a pile of junk, is a life skill. By giving people credit for learning this we are motivating the learning of a hard skill, one that is not purely procedural.
- Show them how you (oh great and powerful one) deal with failure. I used to have pre-made PowerPoint slides that I could walk through that would just work. This was great for teaching procedural skills, but the students would completely miss the fact that I didn’t remember half that stuff when I wrote the slides and had to test-and-check 5 things, use Google, and look up the documentation. It also made it all seem easy, and therefore a personal failure when they couldn’t do it themselves. I’ve since shifted to “live coding,” which means that I fail in front of them and then show how to recover. It’s not even that hard: my crappy memory and bad spelling do most of the work for me. I should turn off auto-complete to make it even more exciting. I try to say what I’m looking at when I read the page, how I interpret the API documentation with 500 arguments (trick: start by ignoring the ones with defaults), and how I decompose the problem to test-and-check.
- Control the script. One of the things we often do to make problems “real” is to inject bad data or bugs into the code. I think the same idea can be applied here. You can control the amount of “struggle” a student has to deal with by scripting almost everything except for a couple of critical pieces that get the student to learn. I think you can find the right level of “hard” for a student: enough that they learn something and come away feeling good that they dealt with the obstacle. A faculty member I really respect has a “packaged” problem for new researchers: a paper, some data, a bit of code, and a test script, all set up, that he hands over to new students. In part this is a “test” (can you really do the work?), but I think it can also be made into a scripted learning opportunity for the meta-cognitive tasks.
That’s what I have for now. It’s not a magic solution, and clearly will not work in every situation, but I hope to build a set of strategies over time. Have any ideas? Please share them.
But for now, as long as my students work at the edge, bullshittery or not, I’m going to include meta-cognitive learning objectives in their education. They’re going to need it.
Footnote: Why I May be Wrong but Think I’m Right
Let me say that this post is not about:
- The fact that I had to walk uphill 4 miles (in the snow) each way to school and so you should do it too. I recognize that I’m old(ish), that software has changed, and that my memory is biased.
- That I think the command-line is the best thing since sliced bread. On the one hand, I do have the Unix barf bag hanging in my office. On the other, I do like the command line and see value in it. But again, I’m also one of the insane people who like Perl. I recognize the possibility that I’m crazy.
- That I think students should be hazed as part of their indoctrination (mostly I think cultural “initiation” done in this way is dumb). I recognize that “hard things” can have unintended (read: very bad) consequences.
- That I think this applies to everyone. This is about teaching future researchers how to be researchers. I’m guessing some OS researchers will smirk at the thought that this conversation is even needed (yes, yes, we get it… you’re hard core). Some of the lessons do apply to other kinds of students, but I recognize that there are fundamental differences in the experiences of the “student” and the learning objectives of the “instructor”.