Is this assertion actually true? I’m not sure it is. People such as Michael Lopp, Lara Hogan, Camille Fournier, and others have been moving the practice of engineering management forward by leaps and bounds in recent years, and their work would need to be considered as part of any minimal definition of where “the bar” is.
Indeed, I agree with this. In fact, it’s a key idea behind my master’s thesis: http://bit.ly/AllspawThesis
My issue is that much is missing from the dialogue here (and from the ACM article’s perspective) about what goes into the abductive reasoning that people engage in during anomaly response.
Is this true? A fault-tolerance approach based on component redundancy *increases* complication through the addition of components and new forms of behavior (health checking, failover, etc.), yet at the same time it can increase the reliability of the service it delivers.
I’m guessing that by simplicity you mean the ease of understanding the system’s details and behaviors?
I do not believe this is an objective truth. Related: https://twitter.com/allspaw/status/864447734744580096
The amount of code can certainly influence the cognitive effort required to understand what the code should be expected to do under known and imagined conditions, but I don’t think this assertion can be made absolutely.
I agree with the sentiment, and I’d like to contribute to a future world where Engineering is seen more broadly as a discipline that *includes* understanding people. :)
I’m actually speaking tomorrow on this exact topic, inspired in no small part by your work, Indi. (https://craft-conf.com/2016/#speakers/JohnAllspaw)
I think this is an excellent post and not unrelated to qualitative analysis and ethnographic coding. I’m doing something very similar for my master’s thesis: analysis of IRC transcripts of engineers responding to an outage.