The cost of Order in information systems

Joel Grenon
Published in Rethink Software
Jul 28, 2024

Human society spends a lot of effort trying to keep things ordered, aiming for an unreachable state of equilibrium in a chaotic universe. We design regulations and processes to organize ourselves and help most of us behave in a way that is compatible with the established order. But entropy constantly grows in our political, social and economic systems, which are usually not designed to cope with unplanned change and remain very sensitive to countless external stressors.

This applies to any closed system or artificial, orderly process layered on top of a chaotic environment. A neat canvas hiding the ugly details of the real world. We might want to believe that we live within the comfort of these rules and guidelines, but beneath the polished surface, chaos is gradually eating away at the foundation of our civilized world, eroding our perfectly orchestrated order, forcing us to spend time and effort on repairs and, hopefully, to adapt fast enough to prevent a collapse.

We conceive of information systems as orderly constructs existing in an ideal, perfectly coordinated ecosystem, operated by well-intentioned humans who always make the right decision; the one that is neatly written in the user manual or the company’s policies. At least, we plan and estimate our projects as if that were the case. We idealize Order and try to inject it everywhere in our information systems through data protocols, database schemas, etc. Maybe because of the binary nature of computers, we are biased toward Order, always pushing for more. The larger the organization, the more desperate the quest for Order! Order is predictable, Order is control; a powerful grip on an illusion of reality, one that lasted long enough for consulting firms to collect their fat paycheck and leave us with a slowly decaying mess.

Why do we architect our systems like this?

The need for Order comes from how we think about software, which we inherited from the first software developers: hardware engineers! Software was born out of the necessity to streamline computer operation. Hardware engineers quickly realized that being able to program computers without having to solder electronic circuits or manually move jumpers would be faster, cheaper and, ultimately, the reason for their commercial success. So they envisioned programming them using low-level programming languages. I’ll skip over some details here, but they ultimately realized that reusability and, later on, portability were very important properties of software. They created compatibility layers above their hardware, which gradually emerged as operating systems, exposing neat abstractions that programs could use instead of the raw (and ugly) hardware details. This was the genesis of the application concept: a reusable program, persisted on some magnetic or optical medium, that could be installed on any computer to extend its capacity, to personalize and adapt it to the business at hand. Because computers were isolated, and later on linked by very slow networks, the reliance on physical media forced software developers to go through a build and distribution phase, bundling everything their program needed to execute properly on any compatible computer. Builds were required at the time, and this paradigm stuck; it is still used today in a world of fast global networks.
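To make the abstraction point concrete, here is a minimal sketch of my own (a hypothetical example, not from the original era): the program asks the operating system for a file, a tidy abstraction, and never touches disk controllers, sectors or interrupts directly.

```python
# Minimal illustration (hypothetical example): the program speaks to the
# operating system's abstraction of storage -- a named file -- while the OS
# hides the ugly hardware details (drivers, sectors, caching, scheduling).
with open("report.txt", "w", encoding="utf-8") as f:
    f.write("quarterly numbers\n")  # buffered, scheduled and persisted by the OS
```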

So we think about software as if it were a manufactured hardware product. The software industry, led by Microsoft and a bunch of similar packaged-software vendors, took control of the distribution channels, sponsored college training and imposed this product vision on everyone using their platforms. They didn’t have the perspective to reinvent software at the time and, in their defense, the evolution of our industry came so fast, and there were so many new things to absorb, that keeping the existing paradigm was cheaper and better for their investors. We came to accept the software-as-a-product paradigm as the only way to think about software. An offline artefact in an online world. A misadapted tool for solving today’s computing problems, force-fitted by greedy and lazy vendors and consultants.

The Application Model

Having been involved in this industry since the mid-80s, selling off-the-shelf industrial software worldwide in the early 90s, I’m baffled that we’re still using the same paradigm today to solve problems very different from those we faced decades ago. Intuitively, a few reasons come to mind: vendor domination, vendors’ lack of interest in spending money on something that would destroy their business model, a bit of laziness, or plain complacency.

For those of us who were there, we remember how Netscape came close to challenging this model in the mid-90s and how the established companies fought back, pushing their built application model onto the Web, hijacking one of the most beautiful and powerful concepts and transforming it into a sea of semi-isolated, pre-built web applications, creating barriers in a web that was designed to be open and could have been a great foundation for modern software innovation. Application vendors (a.k.a. Microsoft) exported their inherited offline, prebuilt application model to the web, and it paved the way for a new breed of systems. I remember that they were called dynamic web applications because they rendered HTML on each request. But conceptually, they were immutable, semantically and functionally static, built and deployed just like their desktop counterparts.
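As a hedged sketch of what I mean (my own toy example, not any particular vendor’s code): the HTML is rendered fresh on every request, yet the program itself is a prebuilt, deployed artifact whose behaviour is frozen until the next release.

```python
# Toy "dynamic web application" (hypothetical illustration): each GET renders a
# fresh HTML page, but the logic is baked in at build/deploy time, just like a
# desktop application.
from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class Page(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"<html><body><h1>Rendered at {datetime.now():%H:%M:%S}</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Page).serve_forever()  # serves until killed
```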

The fact that you’re still installing applications on your phone, that your digital profile is fragmented across more and more web applications, and that powerful new concepts like decentralized computing are replicating the classic computer architecture and forcing you to prebuild smart contracts… is a sign of how successfully software vendors have imposed this way of thinking about software. Orderly closed systems everywhere: prebuilt, based on a partial snapshot of reality, misaligned with our online collaboration and communication problems, stretching our capacity to freeze reality, to say nothing of the upcoming AI stressors.

The cost of misalignment

There are two major problems with this inherited software paradigm. It is based on single-computer operating systems, and it relies on immutable programs running on a single device (your laptop, your phone or a cloud server). Fifty years ago, a single computer was the whole universe, so programs didn’t need any additional context and could run under the supervision of the operating system, which controlled access and protected the integrity of both the computer and the user. Programs ran under strong identities, recognized and enforced on that single computer. Life was simple.

But today, the first thing you do when you buy a computer is… connect to another one! Most users buy powerful computers at great expense and use their web browser 90% of the time… Even on phones, there’s a whole operating system managing device access, still using files and folders, while 99% of your activity happens somewhere else. It means that these device operating systems are oblivious to your activity: they can’t protect you, and they can’t enforce your identity without the help of an external service. They are nearly useless! You don’t even store your files there anymore! There is a very big misalignment between our computing devices and the problems we’re trying to solve today. Look at all applications, desktop, mobile, web and even decentralized ones: they all enforce their own universe, borrowing a few local resources from the operating system when needed. Each one was built with its own snapshot of reality in mind, and any integration points must be planned in advance, with clear protocols. Each of these orderly realities faces the chaos on its own, requiring constant effort to avoid being crushed by it, and adding to your chaotic environment, since you, the user, are the one forced to make sense of and integrate all these disparate contexts.

Reduce Order == Reduce Complexity == Reduce Cost

Prebuilt applications are everywhere, and there are millions of bright people working on better tools, better frameworks, better runtimes or better processes to keep building them and, hopefully, to reduce their cost and make it easier to adapt them to the changing reality underneath. But sadly, I think we’ve exhausted our options for keeping them afloat and for virtually closing the gap between reality and the snapshot used by our millions of applications. We can’t find more bright people living on $2 a day to give the illusion of efficiency like we did in the early 2000s. We don’t have easy fixes for the growing number of security breaches, or for the lack of a shared strong identity, whose absence provides a cloak of anonymity that leads to fake content and other socially disruptive behaviors. We can’t avoid the growing cost of the dozens of specialized consultants needed to create, deploy and operate the average information system. With AI entering the picture, chances are that our primitive computing model will not fit the level of information and integration required to properly leverage AI for the benefit of humanity. Even worse, AI will exacerbate that model’s weaknesses and accelerate the obsolescence of our primitive and naive computing approach.

No indirect innovation can reduce our growing application complexity as we solve larger problems involving more people and more organizations. Just imagine the chaotic reality of a single human multiplied by millions, with dozens of organizational realities, each one overlapping the others in a quest for survival, performance and financial success. The prerequisites of our prebuilt application model become exponentially expensive to establish, as complexity grows with every colliding reality bubble the system has to interact with. Our assumptions stay valid for less time than it takes to build the system. That’s a recipe for failure.

We need to let chaos enter our information systems. We’ve reached a point where software must stop acting like a discrete product and emerge as a fluid, dynamic organism, with humans not treated as external users but as first-class building blocks, working at peer level with AI and other system components. Software needs to stop being constrained inside a computer, as envisioned by hardware engineers, and start to flow between the various actors (humans, systems, objects and AI) spanning specialized computing resources.

Letting chaos into our information systems doesn’t mean that everything will be random and disorganized. It just means that software must be able to evolve with minimal developer intervention, and that interaction patterns may emerge based on their usefulness, to be replaced later by better-adapted ones. It means we stop trying to create a snapshot of reality for dozens of roles and organizations, and instead give each actor the tools they need to morph their own part of the information system to support what they need today, without waiting for a Soviet-style central-planning IT department to put their priority on next year’s budget. We need to stop trying to simulate whole industries in a simplified role-playing game and reverse our thinking to empower each actor to participate in their ecosystem, pushing and pulling content for their current needs, giving them the tools to create their own perspective on the information system, just as they creatively use email, chat and low-semantic office tools today to compensate for their corporate systems’ misalignments and shortcomings.

Future of computing

With AI agents gradually offering decent alternatives to humans for various roles in our society, we urgently need to reinvent the role of humans in our information systems. Humans are at a disadvantage today: they are fed mostly final, visual data with no guarantee of authenticity, and they must constantly prove their identity and mentally integrate semantically different realities (one per application vendor) to support their day-to-day activities. We need to evolve from computing cavemen to computing creators, each of us becoming our own information system and interacting with AI on the same level. AI is not smarter or more creative than humans (yet?); it’s just that we’ve placed humans at a disadvantage because of our obsolete, hardware-centric computing model that casts humans as external users.

With the dire challenges ahead of us, we need to stop wasting energy on improving a flawed computing model, wasting the enthusiasm and energy of a new generation of bright young developers on stretching an obsolete offline software paradigm to solve modern, highly complex communication and collaboration problems. If you look carefully, you’ll see the cracks in each and every system you work with. Only the richest corporations can afford to build software from scratch; everyone else falls back on generic solutions, which end up increasing the ambient chaos and costing far more than their license fees.

Let’s be bold and change the way we think about software. This is my goal: not only to identify issues, but to propose principles and solutions to fuel a new movement, an online, dynamic software paradigm on which we can build our future, not just the future of computing, but maybe the future of humanity.


Joel Grenon
Rethink Software

Software has been my passion for 40 years. I’m working on a new computing paradigm, merging chaos and AI to empower humans to be more than simple users.