Code Reviews: A Security Best Practice

Robert Glenn
Published in DevOops Discourse
6 min read · Mar 18, 2023

The views expressed here are my own and in no way represent those of my employer, Accenture. Moreover, they do not necessarily represent my behavior given contractual or cultural considerations. This blog is purely intellectual; the ideals expressed here may require strategic consideration. Proceed with caution.

Less and less effort goes into collective code reviews. Instead, we designate a couple of senior developers as “approvers” to give their blessing or redline the effort. Sometimes this works extremely well, right up until one approver goes on vacation, gets promoted, or leaves the firm. Most likely, we need more stringent oversight, and perhaps even government-enforced regulation. We need to treat code reviews as a security best practice, not a “nice to have”.
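
For concreteness: on GitHub, this gatekeeper model usually shows up as a CODEOWNERS file plus a branch-protection rule that requires code-owner review. A minimal sketch, with hypothetical usernames:

```
# .github/CODEOWNERS
# With “Require review from Code Owners” enabled in branch protection,
# nothing merges without a blessing from one of these two (hypothetical)
# senior devs. When either goes on vacation, the queue stalls.
* @senior-dev-1 @senior-dev-2
```

That one line is the entire bus factor.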

There is too much focus on the rock stars and unicorns: those devs and teams that can move mountains, working 12+ hours a day for weeks on end. We elevate these maniacs[1] to god-like figures, sages of software. We stroke their egos and endure their tantrums. We sacrifice them at the altar of arbitrary deadlines. Burnout happens, the gods fall from grace, and tribal knowledge is lost.

Rather than harvesting that tribal knowledge regularly, diversifying the responsibility, and ultimately lowering risk, we choose blind faith[2]. If we instead share the message, we can lead others to enlightenment. Even if the code is perfect or from Linus Torvalds himself, a collective code review is extremely valuable: in this case not to correct, polish, or standardize, but to teach; to familiarize the less experienced with what you consider to be the gold standard.

Unfortunately, we have deprioritized collective code reviews in favor of rapidly producing “working” software that hopefully doesn’t piss off QA or end users. Meanwhile, we’ve flooded the market with boot camp graduates who, by design, haven’t completed multiple years of Computer Science coursework, and, by virtue of this design, were never exposed to many concepts or techniques that aren’t immediately necessary for churning out heavily guard-railed products. Add that to the recent cohort of college graduates who entered the workforce during the Pandemic and were never properly oriented, and we have a waxing workforce with a waning average skillset.

Did we start those onboarding programs back up? Did we go back and provide this orientation to those who missed it (or got watered-down virtual versions of it) due to the necessary evil[3] of social distancing designed to maximize personal health during the Pandemic? Are we all back in the offices yet? If not, then these folks have, and will continue to have, a higher hill of skills to climb. This puts even more emphasis and weight on the rock stars’ work.

We’re going to start feeling a talent squeeze: the prodigies who don’t burn out completely will get promoted. At some point, they will be asked to trade their IDEs for bundled “productivity” software: to sit on the throne and let the next generation wield the sword. Will that next generation be ready? On the flip side, will would-be excellent leaders stay in the trenches because they can’t in good conscience turn command over to what they perceive as underprepared subordinates?

This will also bite us when we determine as an industry we are ready to automate large portions of code authoring. Because we don’t review code anyway, we won’t care if we can’t read it…right up until there’s a bug that the model hasn’t encountered or until we realize we’ve mistrained the model, over-fitting to a bad or incomplete set of non-functional requirements. If this comes to pass, we can expect to negotiate steep signing bonuses for the scorned divas of software development we’ve since fired and now beseech to deliver us from such cataclysm[4].

With the dilution of technical prowess and the advent of AI-generated software, I believe we need to consider moving to something more like the airline industry’s approach to piloting: automate everything, but have an expert human present to review behavior, intervene when necessary, and ultimately be held accountable. In other words, we need to get used to reading someone else’s code. Someday it might be the only job left for humans[5], and that’s only if we establish the behavior as the standard operating procedure now.

Today, we lean too heavily on individual accountability. We obsess over individual certification. The only criteria for a hiring candidate to be a “good fit” are that they have the skills on paper and that they don’t creep out the existing team members. We oftentimes don’t even consider other interpersonal capacities: mentoring skills, code readability, documentation approach, etc. Sometimes we get lucky, and teams evolve to be more than the initial sum of their parts. Other times we get what we asked for: individuals who burn bright with no regard for where their embers fall. We build entire teams to address the collateral damage the superheroes leave in their wake.

The masters may be the later casualties in the war of attrition we will wage against automation, but if the junior devs get replaced by AI, it won’t be long before the models log the 10,000 hours needed for mastery[6]. This won’t be a Deep Blue versus grandmaster showdown. At some point, and at an exponentially accelerated rate, there might not be a competition, only layoffs[7].

Ideals and ethics aside, we must move towards a practice that better shares accountability, to avoid a situation where accountability is locked up within an AI model when it eventually replaces the human rock stars, too. I believe code reviews (with the entire team, not just one or two designated approvers) can help bridge the transition without having to retrain (or fire) everyone for a future that has no or few human coders.
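
Sticking with the hypothetical CODEOWNERS sketch from earlier, the mechanical change is small: route reviews to the whole team instead of two names, and let GitHub’s round-robin team review assignment spread the requests around.

```
# .github/CODEOWNERS
# @example-org/app-team is a hypothetical team. With round-robin review
# assignment enabled for the team, review requests rotate across every
# member instead of piling up on the same two seniors.
* @example-org/app-team
```

The hard part, of course, is cultural rather than mechanical: the whole team has to actually show up and read.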

It doesn’t worry me that ChatGPT et al. will write code. What worries me is that we are running headfirst towards it writing unintelligible-to-humans code. If we allow AI to generate machine code (outside of curiosities), we will find ourselves at an Oppenheimer level of hubris. We will become destroyers of our own profession. Indeed our worlds will be upturned.

We must get our collective acts together, treat code comprehension as a security factor, and approach it with standards that are enforced and audited by third parties. If we instead continue to treat all “working” software as a black box, we do indeed run the risk of a singularity with the application of AI: one of comprehension.

In which case, may whatever gods we create be kind.

Footnotes

[1] Don’t @ me, I was one of these nut-jobs in the past, and I’m sure I’ll do it again because sometimes our industry kinda sucks.

[2] Faith that we can complete a milestone before burnout happens. Faith that we can hire and train a new rock star before the next milestone looms.

[3] Too strong? I’m more just employing the colloquialism than intending to pass judgment. I firmly believe in staying in one’s lane and public health policy sure ain’t my lane.

[4] Assuming this even happens within a few generations.

[5] This is not a joke.

[6] Ok, this kind of is a joke. I’m sure there’s a reasonable Gladwellian scale one can assign to at least certain implementations with enough observation, but I don’t know that it really matters.

[7] I suspect there will be more friction and slower uptake to replacing junior devs than there will be replacing senior devs. In the former, organizations must accept entirely new risks and costs; in the latter, they are merely extending an existing working model (obviously, I’m making a lot of assumptions here, thus the label of “suspicion”).
