Complexity and Impostor Syndrome
I’m a pretty accomplished engineer. I have degrees from Cornell and Stanford; I’ve built data processing, computation, management, and visualization software for genetic and genomic data; and at HealthLoop I’ve led the creation of the patient engagement software that’s currently a front-runner in the field. I have literally helped make people’s health better, and if I died tomorrow, I’d be proud of how I’ve spent my career.
I’ve also been told a few humbling things along the way, usually by people who were declining to offer me a position, and who always made me feel like I didn’t belong in “serious software development.” Once, I was told that I “could be a pretty good developer if I were in a more rigorous environment.” Another time, I was asked about thread management in an interview. My answer was probably somewhere between a B- and a B+, but I added that I’d be judicious in my use of threads because of the difficulty of testing and debugging them, opting where possible for multiple single-threaded, single-purpose processes…and I got laughed at (literally).
One thing these companies had in common was a ton of seemingly needless complexity. The first boasted about their 600% test coverage (I’m not kidding) — one of my disagreements with them was that I prioritize DRY (don’t repeat yourself) even when it comes to tests. If a single change in code requires several mocks and stubs to be updated, haven’t you lost encapsulation, and aren’t you repeating yourself needlessly? And by the way, what exactly are you testing?
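To make that concrete, here is a hypothetical sketch (the `Notifier` class and its collaborators are invented for illustration, not taken from any real codebase) of the difference between a test that mirrors the implementation mock-by-mock and one that checks observable behavior:

```python
from unittest.mock import MagicMock


class User:
    def __init__(self, name, address):
        self.name = name
        self.address = address


class Notifier:
    """Formats a message and hands it to a transport."""
    def __init__(self, formatter, transport):
        self.formatter = formatter
        self.transport = transport

    def send(self, user, msg):
        self.transport.deliver(user.address, self.formatter.format(user, msg))


# Over-specified: this test restates the implementation call-by-call, so
# renaming format() or inlining it breaks the test even when the
# observable behavior is identical. A repeated, brittle description of
# the code -- exactly the repetition DRY warns against.
def test_send_overmocked():
    formatter, transport = MagicMock(), MagicMock()
    formatter.format.return_value = "bob: hi"
    Notifier(formatter, transport).send(User("bob", "b@x.com"), "hi")
    formatter.format.assert_called_once()
    transport.deliver.assert_called_once_with("b@x.com", "bob: hi")


# Behavior-level: a tiny fake records what actually went out. Internal
# refactors don't touch this test as long as the right message is sent.
class FakeTransport:
    def __init__(self):
        self.sent = []

    def deliver(self, address, body):
        self.sent.append((address, body))


class PlainFormatter:
    def format(self, user, msg):
        return f"{user.name}: {msg}"


def test_send_behavior():
    transport = FakeTransport()
    Notifier(PlainFormatter(), transport).send(User("bob", "b@x.com"), "hi")
    assert transport.sent == [("b@x.com", "bob: hi")]
```

Both tests pass today; only the second one will still pass, unedited, after the next refactor.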
During the interview, we pair-programmed a new method. I was given the requirements, my pair (an employee and interviewer) wrote the specs, and I wrote the method. Because I know what the fuck I’m doing, I handled edge cases. Still, he said, “I bet this spec will fail” and wrote a spec to test an edge case. The spec passed, because good software is good software regardless of your trendy methodology.
“You could be a pretty good developer in a more rigorous environment.”
They’ve seen through my charade, I thought. They’re right; I’m not great at writing automated specs like a real developer should be.
The second company was in a niche market and was never going to need massively parallel request processing. There’s nothing worse than prioritizing scale before you have an opportunity to scale, especially when your best-case scaling scenario could probably be handled by four load-balanced web servers and a decent database, with a few single-threaded processes handling web requests (pick your language/framework; it probably doesn’t matter). They ended up shutting down that business unit before their primary (incredibly, elegantly simple) business was acquired by an industry behemoth.
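The pattern I was laughed at for is genuinely simple. A minimal sketch (the squaring “job” is a stand-in for real request handling): each worker is ordinary sequential code with no locks and no shared mutable state, and the operating system plus a work queue do all the coordination.

```python
import multiprocessing as mp


def handle(job):
    # Plain single-threaded logic: nothing to lock, trivial to test
    # directly, and a crash in one worker can't corrupt another.
    return job * job


def main():
    jobs = list(range(10))
    # Four single-purpose worker processes, much like four load-balanced
    # web servers behind a proxy -- parallelism without threads.
    with mp.Pool(processes=4) as pool:
        results = pool.map(handle, jobs)
    print(results)


if __name__ == "__main__":
    main()
```

Because `handle` is just a function, you can unit-test it with no threading harness at all, which was the whole point of my interview answer.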
“Ha ha ha ha! You avoid using multithreaded code!”
They’ve seen through my charade, I thought once again. They’re right; I should be able to manage and debug multithreaded applications if I were a real developer.
When I was just starting out at Incyte in the mid ’90s, I was involved in the company’s expansion from one product to 17. For each product, there was a set of scripts that essentially compared DNA sequences to databanks of other DNA sequences, for the purposes of data cleaning and annotation. This led to 17 different, but eerily similar, scripts floating around the company — and this was LONG before git and forks and pull requests and all that (we used RCS — it was rad, in the sense that no one says “rad” anymore for a good reason). The rapidly expanding group of developers met to see what we could do about this proliferation, and one of my colleagues came up with a solution whereby a single script could handle all 17 cases, plus any other hypotheticals we could conjure up, by way of a config file that went so far as to define a DSL to handle all the edge cases.
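The core of the consolidation idea — before any DSL enters the picture — fits in a few lines. This is a hypothetical sketch in modern Python, not the actual Incyte pipeline (the product names, config fields, and stubbed comparison are all invented): the parts that varied across the 17 scripts become plain data, and one function handles every product.

```python
# One config entry per product, instead of one near-identical script
# per product. (All names and values here are illustrative.)
PRODUCT_CONFIG = {
    "human_est": {"databank": "genbank", "min_match": 0.95},
    "mouse_est": {"databank": "genbank", "min_match": 0.90},
    # ...15 more entries
}


def annotate(product, sequences):
    """Run the shared cleaning/annotation logic with per-product settings."""
    cfg = PRODUCT_CONFIG[product]
    # The real comparison against the databank is stubbed out; here we
    # just tag each sequence with the settings that would drive it.
    return [(seq, cfg["databank"], cfg["min_match"]) for seq in sequences]
```

Plain data covers most of the variation; a DSL only earns its keep once the edge cases genuinely cannot be expressed as values in a table — which is more or less where the story below ends up.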
My colleague was (and is) way smarter than me; he had a physics degree and really understood the theory and science behind what we were doing. He also had a ton more experience. I didn’t understand his solution at all, but I figured that was my own fault — I was the impostor who was only hired because he talked a good game and could hack together a Perl script. The next day, one of the senior engineers asked me:
“Did you understand Fred’s solution? Could you write a config file?”
“Part of it, but not really the [DSL stuff]. But I’m sure I could get it with some help.”
“Or maybe it’s just too complicated. You should propose a simpler solution.”
I created a dirt-simple solution that covered about 80% of the use cases, and when I presented it, I got the expected push-back on the other 20% of the cases. The decision from the group was that Fred and I were to “duke it out” to achieve the capabilities of his solution with the simplicity of mine. We did, and the result was really good. Was I still an impostor, or did I bring something of value?
So I’ve changed my story from Impostor to Superpower.
My superpower is that I’m too stupid to understand systems that are too complex. It sounds more like a liability than an asset when I put it that way, but think about it: what good is designing a complex system if average people can’t use it, sell it, explain it, support it, and extend it? There are cases where that level of complexity is necessary or beneficial, but even in many domains people think of as complex (space navigation? autonomous vehicles?), simplicity and elegance stand out as the main priorities. Maybe you don’t want me architecting your virtual reality environment, but I can live with that limitation, and I’ll happily refer you to Fred (who really is a great guy, very talented, and incredibly smart).
If you’re a simple thinker, you’re not an impostor. You’re the one who should be architecting the mission-critical system. If you can’t understand how it works, almost no one else can either — so decompose it until you understand it, and build it in a way that makes sense to you.
Someone else can always make it way too complicated later.