software quality and ideology

Amy J. Ko
Published in Bits and Behavior
4 min read · Aug 9, 2010

It’s been an interesting weekend. My post on cultural homogeneity at the Mozilla Summit ruffled some feathers and led to a flurry of fascinating responses on reddit. I’ve replied to many of the commenters who work at Mozilla, trying to understand the anger, confusion, and disbelief in their statements, with the following gist: I love Mozilla, Mozilla’s great, but there are a lot of people in the community who are elitist and condescending towards anyone without coding chops. I still stand by that observation, no matter how hard it is to hear.

One of the interesting points of contention in the discussion was the claim I made about the tradeoffs between openness and simplicity. There was a lot of pushback on that one; most of the commenters believed that the two were much more compatible than I suggested. However, rather than try to defend it, I think it would be much more interesting to point out several other tradeoffs between software qualities that I’ve observed, to stir the pot a bit more:

  • security and simplicity. Almost by definition, it’s difficult to design something that is both impenetrable and accessible. This is obviously a gross oversimplification, but easily demonstrated by the concept of a password. What’s simpler: going to gmail.com and reading your mail, or going to gmail.com, entering your password, and then opening your mail? The two trade off with one another in a variety of ways.
  • performance and comprehensibility. The Knuthian aphorism, “premature optimization is the root of all evil,” comes to mind. By designing systems that are time and space efficient, we often make systems’ designs more difficult to understand (and, I’d argue, buggier).
  • privacy and parsimony. The most frugal and sparing of solutions, often described by developers as the most elegant and beautiful, are often in direct opposition to privacy. Take Facebook’s privacy setting schemas: the most sparing of schemas are often wholly inadequate for expressing the complexity of individuals’ expectations about who has access to what.
  • configurability and learnability. One way to think of learnability is as the difficulty of learning the mapping between a system’s inputs and outputs. The more inputs you expose, the more mappings there are to learn.

Of course, these are very weakly defined: it could take decades of writing, thought, and experience to really understand the nature of these tradeoffs and whether they actually exist. I raise them because I think each represents an opposition between two values, and that blind devotion to one over another often leads to problems. Consider security, for example. We have whole communities of researchers and practitioners who amass a great deal of expertise in securing systems. Many of them are my friends. But what I often find is that security is treated by developers and analysts in these communities as a cause, perhaps even an ideology, rather than just one of many possible software qualities. Case in point: I deployed an internal prototype last winter and had several students use the prototype while I gathered usage information. After an hour, my database was full of lots of fascinating data which would help propel my research on the prototype forward. However, one enterprising user decided that rather than use the prototype, he would test it for security flaws. He eventually found a potential SQL injection attack and decided to test it by dropping my tables — so much for my data. Rather than writing to apologize for destroying my database, he wrote proudly, declaring that he’d found a vulnerability in my code and that he’d be happy to help me look for others. Of course, security was the last thing on my mind, and his single-minded devotion to secure software had led to actual damage to my research.
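As a footnote to that story: the vulnerability he exploited arises whenever user input is concatenated directly into a query string. A minimal sketch of the standard fix, parameterized queries (shown here with Python’s sqlite3 purely for illustration; the prototype’s actual stack isn’t specified above):

```python
import sqlite3

# Hypothetical stand-in for the prototype's usage-logging database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage_log (user TEXT, action TEXT)")

def log_action(user, action):
    # Vulnerable version: string formatting lets input like
    # "x'); DROP TABLE usage_log; --" rewrite the query itself.
    # conn.execute("INSERT INTO usage_log VALUES ('%s', '%s')" % (user, action))

    # Safe version: ? placeholders pass input as data, never as SQL.
    conn.execute("INSERT INTO usage_log VALUES (?, ?)", (user, action))

# Even a deliberately malicious string is stored as inert text.
log_action("student1", "opened'); DROP TABLE usage_log; --")
rows = conn.execute("SELECT * FROM usage_log").fetchall()
```

With placeholders, the table survives the attack string intact; the injection payload ends up as an ordinary value in the `action` column.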

Now I’m not saying that security doesn’t matter. My claim is that what matters depends on the situation. I did care about security in the story above, but not as much as I cared about getting the data and saving time. Given limited resources, all developers have to make tough choices between competing values. What I find problematic about many of today’s developer cultures is the belief that any one software quality matters more than another in all situations. It’s been true for performance in the academic computer science research community; it’s been true for parsimony and elegance; it’s increasingly true for security, privacy, and yes, even usability. The world isn’t that simple, and any belief that one value ought to always take precedence over another does a disservice to the users who operate in situations with different priorities.

I’m certainly not claiming that all developers believe this. If there’s anything I’ve observed amongst all of the developers I know, it’s that the more experience they’ve gained, the more they realize just how difficult it is to clarify the priorities for a project in a way that is acted upon consistently across a team and over time. In all of the research that I’ve done studying developer discussions around design decisions, this seems to be the central struggle: how do you clearly communicate to everyone involved how various software qualities rank with one another? This is especially difficult when we don’t even have definitions of these qualities that people agree upon, let alone an understanding of how they interact.

If there was any point to my rambling about the Mozilla culture, it was that more developers need more knowledge about the differing priorities of their users and how those differing priorities interact with Mozilla’s goal of espousing openness. I wish that as a researcher I had more to offer on this subject, but alas, I do not. At least for now.
