
Technical education for a connected world
Who’s going to grade these? I needed a TA. I was 3 weeks into teaching my first full-fledged Computer Science class — Natural Language Processing (NLP) — and I was getting a little desperate. All my students had submitted Homework 1 on time. Great! But now I had a stack of programming assignments and writeups to grade.
I had just stayed up late (after my kids went to bed) for a few nights in a row, prepping Homework 2, then trying to prep Lecture 5. I ran out of time on the latter, so, embarrassingly, I’d had to let class out early after my under-prepped interactive whiteboard lecture on word similarity. Coming up in less than 48 hours was another 90-minute lecture, and I would need to find another 5–10 hours (with my kids sleeping) to prep that. This is ironic. When I was a student, I never imagined that my professors lost more sleep over my classes than I did.
So, grading Homework 1. No time. No TA. No free computer science experts to bail me out. If only my students could grade their homework themselves…
And all of a sudden, it all made sense. Peer reviews!
I turned this epiphany over in my head. This one thing embodied so much of the process I had gone through in entering academia and growing as a professional. It was conspicuous in every journal article that I got accepted, and essential in every grant that I got funded. It was embedded, as code review, in the seasoned software engineering process at the tech startup I’d worked at, Trapit. At a broader level, this collaborative collegial activity, this humility to accept and learn from criticism — I needed it in my marriage, relationships, spiritual life, everywhere.
It was too late to do peer reviews for Homeworks 1 and 2. I’d just have to buckle down with a few more late nights to finish grading those myself. And I knew that when I first got students to do peer reviews, I’d have to coach them on how to do them — meta-review their reviews. I wondered if this would actually save me time. Certainly not in the short run, I thought, but this is what I want them to get out of my class. More than NLP. Character.
I’ve heard that some humanities classes have you read your classmates’ essays and give feedback. But in my 10+ years of Electrical Engineering and Computer Science training, I’d never done a peer review in a class — those started with the first conference paper I co-authored. So I quickly discovered that including peer reviews in my class forced me to re-envision the whole educational framework.
What emerged from my CSEE 562 NLP class is my modern-day re-imagining of technical education. Following the beloved research tradition of coining esoteric acronyms, I call the whole approach techi education, or techied, calling attention to 5 core values that I believe to be essential in scholarly and professional development. Transparency. Excellence. Collaboration. Humility. Innovation.
Transparency
Now that I’d decided on peer reviews, I first needed a practical way for my students to be able to see the submitted homeworks that they were going to peer-review. That one was easy: every project I’ve worked on in the last 2 years — industry, research, or operations — is on git. Git is a version control system for code, but it’s almost more a way of thinking and collaborating.
The key concept is transparency. Your work is tracked incrementally and made visible to others. It promotes reproducibility because you know others are going to run your programs and review your work. It favors simplicity because any unnecessary complexity becomes a headache. It reinforces integrity because you have nowhere to hide. (AND it enhances employability!)
So I started a git repository, and I announced that we’d do peer reviews on Homework 3.
+ Excellence
But is that too transparent? What’s to keep a student from just copying and modifying another’s work, or at least their ideas? A simple solution: have students make an initial submission privately to me; then, after the due date, make submissions public in git so that classmates can review them. We did this. But “preventing cheating” isn’t the main thing I was going for.
At some point in your career, nobody cares about your GPA, SAT, or GRE scores anymore. You get hired (or reviewed well) because your skills fit with a project and because your style fits with a team. So I wanted to reflect this in a new, more well-rounded definition of excellence. I wanted my students to investigate and solve, independently. I wanted them to feel like they could build on top of the work they had done in the past. I wanted them to be able to explain that work to others and learn from others.
This kind of Excellence ≠ Grades. So I redefined grading. Drawing inspiration from the rubrics for paper and grant review, I modified the overall grading structure of the class (see Collaboration and Humility below) and gave a rubric for every homework assignment. Objective metrics like “Correctness of approach (30%)” were balanced by subjective metrics like “Analysis of results on dev set (20%).” This helped elucidate strengths and weaknesses of each student’s work — and still put a numerical grade on it.
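The weighted-rubric idea can be sketched as a tiny script. Only the 30% correctness and 20% analysis weights come from the examples above; the remaining category names, their weights, and all the scores are hypothetical fillers for illustration.

```python
# Sketch of a weighted homework rubric. "Correctness of approach" (30%)
# and "Analysis of results on dev set" (20%) come from the essay; the
# other two categories and all scores are hypothetical.
RUBRIC = {
    "correctness_of_approach": 0.30,
    "analysis_on_dev_set": 0.20,
    "code_quality": 0.25,      # hypothetical
    "writeup_clarity": 0.25,   # hypothetical
}

def grade(scores):
    """Weighted sum of per-category scores (each on a 0-100 scale)."""
    assert abs(sum(RUBRIC.values()) - 1.0) < 1e-9, "weights must total 100%"
    return sum(RUBRIC[cat] * scores[cat] for cat in RUBRIC)

scores = {
    "correctness_of_approach": 90,
    "analysis_on_dev_set": 80,
    "code_quality": 85,
    "writeup_clarity": 70,
}
print(grade(scores))  # 0.30*90 + 0.20*80 + 0.25*85 + 0.25*70 = 81.75
```

A rubric like this still yields a single numeric grade, while making explicit which parts of the work were objective and which were a reviewer’s judgment.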
+ Collaboration
When I say I wanted my students to be able to do “independent” excellent work, I don’t mean isolated. I mean that collaborative resources and relationships should be considered, internalized, and utilized appropriately to produce the best result for the task at hand. Peer reviews embed a collaborative mindset. So do interactive lectures, group work during lectures, team projects, and other strategies. But another strategy arose naturally out of trying to do collaboration transparently: attribution.
Transparent attribution is a far cry from plagiarism: it’s the modus operandi in academia. We don’t pretend that no one else has tackled a problem; in fact, we go out of our way to learn about it when someone else has. When someone else does excellent work and we recognize its value, we cite it, share it, and build off of it. The excellence of the original contribution is measured in citations and adoption by peer researchers.
I decided that I would measure excellence by requiring explicit attribution. “Are you making things that are so helpful to your peers that they benefit from your work?” “Did your peer review catch something an instructor missed?” “Does your assignment help a peer to frame the problem better next time?” If so, other students will transparently attribute things to you, and that goes into a new collegiality portion of your grade. That’s collaborative excellence.
+ Humility
In grad school, a lab-mate peer-reviewed a draft of my conference paper. He pointed at a rhetorical question I’d written, and haltingly told me it was cheesy and pedantic. I was offended. I considered myself a pretty decent writer and I’d considered other wording options. And anyways, this was my paper, so I should get to use my style. But by the time I submitted the paper, I had swallowed my pride and reworded it more clearly and engagingly than before — with no rhetorical question.
First drafts are never excellent. All my best work is the result of iterative improvement, and often, feedback from others. On the latter point, I believe peer reviews make it normal to receive (and give) constructive criticism, humbly. It is possible to take it (or give it) poorly, yes. But in a peer review structure with meta-reviews, at least students are exposed to the process, and can improve at it under supervision as well.
To capture the concept of iterative improvement, I decided that I wanted to give my students the option to improve their initially-submitted homework during the 2-day peer review period. I gave a minor penalty so that students would still try to submit their best work on the first try. Their grades, then, partly reflected their willingness to engage in an iterative process of improvement. Even my best students made use of the second-draft rule on occasion.
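The second-draft rule can be sketched numerically. The essay only says the penalty was “minor,” so the 5% figure below is a hypothetical value, and taking the better of the two drafts is an assumed design choice, not something the essay specifies.

```python
# Sketch of the second-draft rule: a student may resubmit during the
# 2-day peer-review window at a small penalty. The 5% penalty and the
# max-of-both-drafts policy are assumptions for illustration.
SECOND_DRAFT_PENALTY = 0.05  # hypothetical "minor" penalty

def final_grade(first_draft, second_draft=None):
    """Return the first-draft grade, or the penalized second-draft
    grade if resubmitting actually improved the result."""
    if second_draft is None:
        return first_draft
    return max(first_draft, second_draft * (1 - SECOND_DRAFT_PENALTY))

print(final_grade(78.0))        # no resubmission: 78.0
print(final_grade(78.0, 90.0))  # penalized second draft: 90 * 0.95
```

Under a rule like this, resubmitting only pays off when the improvement outweighs the penalty, which preserves the incentive to submit your best work the first time.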
+ Innovation
Innovation is crucial to research and to the tech industry. Of all the values I’ve identified here, this might be the hardest to systematically develop and evaluate. Can you really expect a first-time NLP student to have an innovative idea that might have legs? And can you really measure how new and fresh something is?
I think we can get close. Idea generation is part of innovation. That needs to be complemented by an analytical discernment, narrowing down to worthwhile options. Then, they need to be fleshed out, implemented, and revisited. There’s a cycle to innovation.
I tried 2 things. First, I designed some of the homework assignments so that innovation was part of the scoring rubric. Second, for our final, we ran a coding sprint for 90 minutes with what I’d consider to be a full cycle of innovation. Group brainstorming, filtering out weak ideas, implementing, presenting, and (you guessed it) peer-reviewing. This isn’t the only way to build innovation, but coding sprints are a quicker way (than, say, capstone projects) to expose students to the full cycle of innovation.
Towards the end of my class, I wrote up some equations on the board, and one of my students responded: “You’re taking all the romance out of language! It’s all models and statistics!” I guess that’s exactly what I’ve done with techied, too (though I’d say there’s still plenty of romance in both language and teaching!). I’ve tried to build a framework for technical education that embeds some of the assumptions and needs of our present-day, connected, collaborative world. While I’ve employed specific methods, I’ve written here about values because the implementation may (should?) vary for a different topic or instructor.
My accommodating students helped me work through these issues on-the-fly and then provided meta-meta-reviews of their experiences. But CSEE 562 Winter 2016 was my “first draft,” and I have copious room to iteratively improve. Fellow educators: let’s build a new generation of scholars, coders, and teachers who are people of character. Clone my techied git repository. Build on these ideas. And join me in a new kind of technical education.