Quality > Quantity

Loren Kohnfelder
6 min read · Mar 14, 2022


Honing a Japanese knife.

Soon after publishing a short tale of digital disappointment, I’m thinking about better paths forward. While I would never argue that it’s simple, I have a hunch that the answer is to build better quality software — and the way to do this is to build much less of it.

But what exactly is “better”? My answer is that while this remains subjective, the prime directive must be to serve end users well. For media and communications software (to extend from the aforementioned article), first and foremost that means interoperable, reliable, free, and easy to use. Today’s digital landscape is anything but that. Unless we somehow set aside the profit motives of software companies (the current prime directive), we will stay stuck in this dystopia of gatekeepers requiring us all to maintain numerous accounts with each entity (agreeing to one-sided terms, often paying recurring fees, and so on) while they cash in on network effects and assert lock-in just because they can.

We as consumers could force this change any day we like, but the reality is that most people don’t want to think about it, much less forgo connections and entertainment to make a political gesture that, without collective action, would have no discernible effect.

And how exactly do we create better software by making less of it? This part is even more challenging because it would require a sea change in the way software is conceived and built. Fortunately, it has actually been done before, for example in the case of the web browser. The history is a little messy, and I don’t mean to suggest that only one web browser has ever been developed, but I think I can explain it.

Anyone who remembers the early dot-com days, when telephone modems made that distinctive sound connecting to dial-up internet service providers, will remember the explosion of PC apps. People bought all kinds of special-purpose software, and new PCs often shipped with CDs full of apps that did all sorts of things. Had the World Wide Web not been invented, we might well have needed separate apps for different incompatible internet services from various corporations and enterprises.

For a time, internet services such as AOL or Prodigy or Compuserve offered proprietary software to use their offerings. At Microsoft in the late 1990s, I was the program manager coordinating with these companies to integrate with Internet Explorer as a more standard hybrid — but the service providers still insisted on customizing the browser for their customers. Fortunately, all of this went by the wayside when it became clear that a single standard browser that worked for all websites was best.

A parade of browser implementations followed, and Internet Explorer has long since been dethroned, but the great achievement is the interoperability: not that there is only one browser, but that with one browser an end user can surf the web in its entirety.

To be completely forthcoming, a few uppity websites impose strict browser requirements. Occasionally there is a good reason for this, but for the most part it just means they have cut corners on compatibility and testing. While the precise definition of a compliant web browser remains subjective, there is good consensus, and at minimum webpages should degrade gracefully, remaining usable with the loss of some fancy bells and whistles. These sites may be considered exceptions to the rule, or, by a hard-nosed interpretation, simply broken.
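To make the graceful-degradation idea concrete, here is a minimal sketch in JavaScript. The function name and the injected `env` parameter are my own illustration (in a real page, `env` would simply be `navigator`): the page uses a fancy capability when the browser provides it, and otherwise falls back to basic behavior that works everywhere.

```javascript
// Progressive enhancement sketch: use a fancy feature only if present,
// otherwise fall back to a basic, universally supported behavior.
function shareLink(url, env) {
  // Feature detection: does this environment offer a native share sheet?
  if (env && typeof env.share === "function") {
    env.share({ url });        // fancy path: native share dialog
    return "native-share";
  }
  // Graceful fallback: still usable, just fewer bells and whistles
  return "show-copy-button";
}
```

The point is that the page never assumes the fancy feature exists; it checks first, so a standard browser remains fully usable.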

But web browsers are just an example of how we build interoperable technology — all the world’s websites — on top of common infrastructure. Modern web browsers are impressive apps both in terms of their functionality and reliability. Compare this to the nightmare of every major corporation having a custom app for their corner of the internet. It took years and countless hours of development and testing to achieve our top browsers, resources only available because they are shareable across the entire web.

What does this mean going forward, beyond the browser? In the article referenced, I mention instant messaging, video calling, and streaming, so this would mean a common client for each category that all competing services would interoperate with — instead of numerous incompatible apps, one per service. One way to do this would be to provide browser compatibility as some do already (e.g. meet.google.com for Google Meet video calling), but if this is too difficult or inefficient then one client per category of offering would be huge progress. (Anticipating a possible confusion: providing a thin client that requires proprietary plugins is not much better than separate apps.)

Having worked for many years in software security, I’m instinctively defensive about software use. On the web, I get to choose my OS and my browser, which is a fair deal because nobody is telling me what software I have to use. Currently, if you invite me to a Zoom call, you are forcing me to install their app. (Not to pick on them, but they are featured in my recent experience, and Zoom weathered major security failings in 2020–2021. Notably, Zoom’s security issues persist into 2022.) Just as I would never invite friends to dine with me at a restaurant with a reputation for giving diners foodborne illness, I wouldn’t want to require them to install software that might open them up to a malicious attack. Far better, both practically and ethically, to let them choose their own software, and interoperability ensures just that.

Naturally, providers want to make a profit, and they can do this by charging a fee for hosting or for content, so long as the client is free and open. As an analogy, Adobe gives away Acrobat Reader so everyone can see PDF files, but charges for full Acrobat or otherwise licenses the technology to create PDFs.

Software standards have a complicated history and certainly have their shortcomings. So an important question to address is when it makes sense to offer non-standard functionality in the interest of innovation. There is no easy answer to this subjective question, but I think the pragmatic answer is to leave the choice to the end user. That is, the basic functionality should just work — that’s interoperability — but if someone installs a special app in order to get enhanced features, that’s fine: just don’t force them. For multi-party apps like video calling, if special software lets you augment your video signal to give yourself cat ears, be my guest, so long as I can still see your video with a standard client. If someone invents a way to shake hands with haptic gloves and special software, go for it, but on my standard client please display a message like “shaking hands now” (with a link explaining how I can join in) so I’m not completely out of the loop.
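One way to sketch this standard-fallback idea is shown below. The event shape and field names are entirely hypothetical, not from any real protocol: each extension event carries a plain-text fallback, so a standard client that doesn’t support the extension still shows the user something meaningful.

```javascript
// Hypothetical extension-event format: every extension must include a
// human-readable fallback so standard clients are never out of the loop.
function renderEvent(event, supportedExtensions) {
  if (supportedExtensions.includes(event.type)) {
    // Enhanced client: render the rich extension payload
    return { mode: "rich", payload: event.payload };
  }
  // Standard client: display the fallback text plus an explanatory link
  return { mode: "fallback", text: event.fallbackText, link: event.infoUrl };
}

// Example event for the haptic-handshake scenario (placeholder values)
const handshake = {
  type: "haptic-handshake",
  payload: { gloveId: 42 },
  fallbackText: "shaking hands now",
  infoUrl: "https://example.com/haptic-handshake",
};
```

A client that has installed the special software renders the rich payload; everyone else sees “shaking hands now” with a link, which is exactly the degraded-but-usable behavior interoperability demands.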

That’s my radical vision, which I believe would transform the software landscape. Interoperability does require standards, specifications, shared common components, and interoperability testing. Yet if we shared common code we could focus on building a few world-class implementations instead of dozens (hundreds? thousands?) of mediocre ones. End users would love the interoperability and simplicity: one free install, one user interface to learn. We can build this; all we have to do is give up dreams of world domination via proprietary software and protocols. If the features that distinguish one competitive digital service from another are so amazing, then offer basic interop and sell that technology as an upgrade.

Done right, we get rich digital services that (unlike today) just work.


Loren Kohnfelder

Author of Designing Secure Software: A Guide for Developers. Find me at https://designingsecuresoftware.com/ Writing software since 1968. Living on Kauai.