Coding bootcamps versus universities

Amy J. Ko
Bits and Behavior
Mar 25, 2016

There’s been a lot of talk lately about the competition between coding bootcamps and computer science departments. Is this competition real? And if so, what does it mean for computing education in universities?

I do think it’s real, but not in the way that people usually mean. The typical argument is that the only reason to go to college is to get marketable skills and bootcamps provide the same marketable skills for less money and time (without forcing you to learn a bunch of other useless things). This is an easily defeated argument:

  • While there is a lot of learning in universities that has questionable value, there is also a lot of perspective shifting, networking, and personal growth that happens in universities.
  • Coding bootcamps (just like many computer science departments) don’t really admit students without a baseline coding skill set; the people who typically enroll are the ones who can already code but want to learn a new platform. So CS departments and bootcamps actually teach two very different sets of skills.
  • People who finish coding bootcamps still lack fundamental skills in complexity, scalability, operating systems, software engineering, and software architecture. Bootcamps skip much of what universities teach because they’re brief. I have hired graduates of both, and CS majors are unquestionably more effective in both the short and long term (not necessarily because of the education happening in colleges, but that’s another blog post).

The competition that I believe does exist is in the quality of computing education. Bootcamps have a far stronger incentive to teach well than universities do. If they don’t, graduates will underperform in the jobs they take, the bootcamp will quickly develop a poor reputation among employers and students, and the whole value proposition of training productive, talented software engineers will erode. That’s a pretty strong incentive to improve instruction, and a pretty tight feedback loop for enabling these improvements.

As a professor at a research institution, I have few such extrinsic incentives to teach well. Because I have tenure (to protect my intellectual freedom), I don’t lose my job if the quality of my teaching declines. Students can give me bad evaluations, but as long as I’m breaking new ground in my field of research, my reputation among my colleagues won’t suffer much, especially since my colleagues around the world rarely have a detailed window into my teaching efforts. Sure, I suffer shame every time I explain something poorly to a room of 70 eager undergraduates, but that’s nothing compared to the shame of publishing embarrassing research, missing a grant deadline, or going to a research conference and having nothing new to talk about. (I should say that I personally care deeply about the quality of my teaching, often to the detriment of my research productivity, but this is not true of many research-driven tenure-track faculty.)

All of the above would be pretty frightening to me if it weren’t for one subtle but important detail: bootcamps and faculty aren’t incentivized by actual teaching quality or actual developer productivity, but perceived teaching quality and perceived developer productivity. This is an important difference: students can feel like they’re learning a lot but be learning little. Engineers can seem effective but be primarily propped up by their teammates. Students and managers are generally poor at seeing actual quality, and they often mistake bootcamps’ selection of skilled developers for the training of skilled developers (just as they do with computer science departments).

This puts me in an interesting bind. As a computing education and software engineering researcher, I’m deeply interested in developing rigorous ways to measure computing teaching quality and software engineering productivity. These are lifelong research objectives that I pursue tirelessly with colleagues from around the globe. But if we ever succeed at providing more effective instruments to measure these qualities, the world will quickly shift, empowering managers to find the most effective training (whether bootcamps or universities), strongly incentivizing and empowering bootcamps to find the most effective pedagogy, and threatening to put tenure-track professors like myself further on the defensive.

And it won’t necessarily be researchers that develop these better measures. Software companies have a strong incentive to find better ways to measure developer productivity. Bootcamps have a strong incentive to find better ways to measure their teaching efforts. They may very well invent these better instruments before researchers. (I doubt it, given their lack of psychometrics training, but then again, they just have to feel like their measurements are valid!).

Obviously, given my research interests, I welcome this future, and actively work toward achieving it. But I fear that many of my academic colleagues don’t realize the latent competition lurking beneath the surface of market-driven educational efforts and just how transformative the measurements we are developing might be in unleashing its force upon the academy.


Professor, University of Washington iSchool (she/her). Code, learning, design, justice. Trans, queer, parent, and lover of learning.