Ice Floe Water — Pixabay

How organisations heat die…

Entropy, Management and Waste in complex organisations

David Williams

--

Think about two fictional UK road haulage organisations.

And let’s imagine that both have the same number of lorries. Both use identical lorry models, so have the same load capacity, speed, fuel efficiency and so on. Both have the same customer base. Both have pools of drivers and loaders with identical skillsets, initially distributed in an identical way across the country.

Haulier 1: KnowledgeCo

The firm has:

  1. complete knowledge of the geographic location of each of its lorries. This data is collected once per second by precise positioning technologies — which also means they have a practically complete knowledge of the speeds of those lorries and a reasonable approximation to their acceleration and braking behaviour.
  2. complete cargo manifests which define exactly the end destination of each item on the back of each lorry, how that groups up into individual pallets, the corresponding weight and dimensions of those pallets, and any special conditions for transport (maybe the cargo needs to remain upright, or be kept within highly specific temperature, pressure or humidity tolerances). Those manifests also specify the person requesting the movement and the request date, the dispatcher and the dispatch date, the receiver (defined as the one receiving the consignment) and any receipt deadlines, delivery instructions for the handler, the customer (defined as the one paying the money) and the date of purchase of the service, and the relevant tax and customs jurisdictions applicable to the cargo along with details of any corresponding duties or regulations. Those manifests also contain information on the historic and future delivery path of the cargo — i.e. the previous and future shippers, previous and future warehouses (along with precise location information for the warehouse bay, the height and length positioning on the relevant shelves and storage instructions), details of the past and future haulage firms used in the source countries (or any onward leg of the delivery) and information on whether the pallets need to be unpacked or unbundled at any one of these stages. The manifest information is updated in real time at the point when cargo is either loaded onto or unloaded from any KnowledgeCo vehicle or distribution centre.
  3. complete knowledge of the status of each individual vehicle; recording both the mileage of the vehicle and a series of measurements of key properties of critical items (e.g. the amount of time tyres have been on the vehicle, the amount of wear on the tyres from their ‘brand new’ tread, the engine temperature and oil levels, the state of the gearbox, the condition of the battery, the condition of storage capabilities like refrigeration, as well as the status of the monitoring equipment itself). This enables the firm to establish and control contractual arrangements with its engineering maintenance provider FixCo — which, with an accurate knowledge of the engineering state of its vehicles, enables a relatively accurate forecast of the workload coming from KnowledgeCo as a customer. KnowledgeCo requires FixCo to be open and precise about the time taken to conduct maintenance, repair and overhaul activities — but this is straightforward; firstly because the data gathered is highly granular and so the lorries ‘know’ how worn certain parts are before they arrive at the workshop, and secondly because of the impeccable maintenance record which sets out the complete history of work done on each vehicle with full traceability to the parts used in each activity and the corresponding mechanic who did the job. Similarly, the granular knowledge of the consumption of petrol, lubricants and spares enables the firm to negotiate deals with its commodity suppliers (PartsCo, DieselCo and LubeCo) with precision in the quantities, delivery locations and delivery timings. Naturally, KnowledgeCo knows how much of each is currently stored at each depot location across the country; data it collects hourly and in relation to the specific registration of the specific vehicle using the supplies, so that a rich understanding of the pattern of consumption can be discerned. This also makes it easy to trace defective parts, oils and fuels right down to the supplier shipment. As KnowledgeCo happens to be the distributor of its own parts, fuels and lubricants, it turns out that it knows the origin, supply route and supply conditions of every single inbound shipment.
  4. complete knowledge of its drivers. This includes things from the perspective of the driver (pay, benefits, specialised requirements, shift patterns, overtime) but also information on whereabouts and position in the shift, enabling the firm to know whether a driver is able to complete a delivery route within their set of working hours. Individual drivers are matched to vehicles so that driving behaviour (from the data collected through movement) and vehicle performance can be linked to individual drivers, allowing risky or ineffective driving patterns to be discerned and KnowledgeCo to target improvements in driver training or rostering. Drivers themselves have access to information captured every second on the flow and density of traffic on the road network, as well as intelligence on routes that have navigation challenges calibrated to the dimensions of the vehicles (such as low bridges, weight restrictions, tricky turning circles or other restrictions).
  5. the ability to bring together the understanding of its shipments, routing, customers, suppliers (and corresponding contracts) and staff so that its management can make decisions about how to adjust its operational business capacity with respect to both predictable and unpredictable business conditions, and this information is built upon the latest positions, timings and statuses of the resources within its business. This allows crises (e.g. unforeseen road closures, sudden urgent changes in the customer’s requested destination, traffic accidents, diesel shortages, foreign transport worker strikes, foot and mouth disease, the bankruptcy of LubeCo, the bankruptcy of a major client), whilst painful, to be bounded — making their resolution tractable. KnowledgeCo keeps data on these crises so that the risks of them happening in the future have corresponding determinate plans, measured amounts of redundancy can be established, trade-offs of ‘own vs. hire’ can be defined, and focused insurance policies bought. It also allows investments to be made in a laser-focused manner based upon up-to-date information on the condition and availability of assets and staff.

Note that KnowledgeCo does not have the capability (claimed by distant competitor BullshitCo) to forecast future traffic trends, future economic trading and movement patterns, future technology trends in logistics, vehicle engineering breakthroughs and other things that KnowledgeCo has assigned to the category of unpredictable business conditions. As a firm it is not too worried by the lack of this capability — partly because the firm is careful to distinguish knowledge from guesswork, and partly because BullshitCo are not doing terribly well.

Haulier 2: EntropyCo

This firm:

  1. has precise knowledge about where its lorries are… but only when they arrive or park up at company depots. As there’s no information about where the vehicles are the rest of the time when conducting deliveries, those lorries can be pretty much anywhere as far as EntropyCo can prove. As a result, the firm cannot be sure when the lorries can be expected at the depots — their arrival feels random — so they have to maintain a sizeable depot footprint in case (as has been known to happen) all of their vehicles turn up at the same time. They’re unsighted on their travelling speeds, so the durations for driving certain routes are based purely on the past experiences of drivers.
  2. loads cargo as quickly as possible when vehicles arrive at pick-up points. The state of consignments is known at the point where they are loaded; however, loaders are simply pointed at the piles of goods and promptly pack waiting vehicles according to the destinations told to them by the driver. Loaders work out amongst themselves how best to fit the cargo into waiting vehicles — working principally on a ‘first in, first on’ basis. The assumption is that all of the cargo for the particular set of destinations that the driver is going to on their delivery route is loaded from the waiting consignment. Drivers assume that loaders identify goods requiring particular storage conditions and load/orientate those goods correspondingly. EntropyCo drivers congratulate themselves on the fast turnaround of their lorry-loading. Drivers observe that the long time spent unloading the vehicles at the destination, the occasional damage caused by incorrect storage conditions and the delivery errors are not their problem.
  3. waits for vehicles to break down before performing repair activities. Maintenance activities (replacing fan belts, changing oil, tyre substitutions, furry dice cleaning) are performed by EntropyCo when mechanics happen upon problems during random spot checks of vehicles. Because these activities are somewhat random, EntropyCo is sure to hold substantial inventories of spare parts — just so that they aren’t caught short, and to cover the fact that occasionally shipments of parts can be defective. However, most of the time, vehicles tend to break down roadside. This means that EntropyCo relies heavily on the nationwide call-out contract that it has with FixCo to perform roadside recovery, identification of the repair and then performance of the repair across its country-wide workshop network. They pay a fixed annual premium for this service given that these breakdowns feel inherently random across the fleet.
  4. has exactly the right information on its workforce to be able to pay them (never to be taken for granted). Shift patterns are often out of sync with where lorries are across the network — which is not surprising given that vehicle arrivals are unpredictable. This means that vehicles that are ready to go sometimes sit idle awaiting drivers. Without any information to the contrary, EntropyCo assumes that all of its drivers are roughly of the same driving standard, and as such gives blanket training packages. Drivers are equipped with atlases that enable them to plan routes. Oftentimes they run into unforeseen traffic; some of the savvy drivers go down alternative routes (also sometimes overrun with traffic) but most wait it out.
  5. has limited capital as borrowing costs are very high and there is limited equity capital. Crises occur often in this industry, and as such precious capital needs to be spent reacting to them: stockpiling diesel, treating lorries that are at risk of having gone near foot and mouth fields, finding a replacement for bankruptcy-hit LubeCo (“Who knew we bought their lube?” the Chief Procurement Officer was overheard saying). If and when the firm does invest, it tends to do so in additional capacity (when it runs out of extant capacity); at least it is known that more lorries will be useful. As such it finds it has a lot of lorries. Sometimes all the lorries are used, but sometimes there are just no free lorries. Same for the depots. Technology companies and consultants sometimes speak to management advising on investing in computer solutions to improve data quality and business decision making, but it’s not clear how this impacts the bottom line — the MS Excel cost benefit models seem to show really long payback periods and poor NPVs. However, they are interested in a proposition from distant competitor BullshitCo, whose VP of Sales claims the ability to predict future traffic flows and the next key economic trends…

To anyone familiar with the brutal logistics and haulage industry, EntropyCo is only likely to feature as a viable business shifting goods around the bases of the continent of Antarctica. Even then that’s debatable. The question is why? The answer is obvious: the management is poor. So why? That answer is semi-obvious from the spoiler in the caricature — the prevalence of entropy, and/or the presence of knowledge; the two are inverses of each other. Entropy determines the extent to which activity in the organisation is ‘waste heat’. It’s semi-obvious because entropy/knowledge is only semi the story. The knowledge needs a context or purpose; otherwise it is indistinguishable from randomness. In the case of KnowledgeCo and EntropyCo the tacit goals are:

  • survival
  • being a “Co”, they’re interested in making some money; and,
  • they are interested in making money beyond today — i.e. they want to make future money. This last is arguably emergent from the first two but worth reinforcing.

Reducing entropy and achieving goals are the universal twin purposes of management in any organisation.

Goals are the easy bit. Anyone can come up with goals. Survival and profit are boringly obvious. Organisations have targets and ‘vision statements’ up the yin-yang. Entropy, on the other hand, is no joke. This phenomenon — that every manager has no choice but to combat — is actually, technically, literally and scientifically the phenomenon that is going to annihilate the universe.

The Universe will likely heat-die. It sort of won’t look like this. Thank entropy. Image by Gerd Altmann

What is entropy?

The term has baggage. John von Neumann (the scientist who applied entropy to quantum mechanics, and the inventor of game theory) advised Claude Shannon (the scientist who applied entropy to information) to call the central concept in his breakthrough studies in information theory ‘entropy’ (we will come to this concept later). The lesser reason was the mathematical affinity with the corresponding and better understood phenomenon in thermodynamics. The main reason was because (in his words) “nobody really knows what entropy is; and as such, you’ll always win the debate”. Shannon took his advice. It so happens that what is now called ‘Shannon entropy’ in information science is the most precise and commonly accepted formalisation of the concept.

Since then the term has had a rougher ride. It is frequently misrepresented, misunderstood and misapplied. Well-meaning individuals have been attempting to find entropy in all manner of things ever since Shannon, often questionably. In particular there have been a multitude of attempts to apply the concept in the management literature — some more successful than others. In other instances the term is used as a medium of intellectual showboating; at best used when terms like waste, uncertainty or randomness would suffice, at worst entirely misrepresenting the concept. As such, the reader is forgiven for rolling their eyes on seeing the term in this essay… For the avoidance of doubt, entropy is not the following: waste, energy, uncertainty, opacity.

Entropy is a tricky concept to grasp fully. Avoid people who claim otherwise. So here are a few definitions:

  • A measure of ‘disorder’ in a system
  • A measure of the unpredictability in a system
  • A measure of how energy is dispersed across a system
  • A measure of the energy unavailable for meaningful work
  • The number of possible micro-states of a system that could correspond to an observed macro-state of the same system
  • A measure of a system’s inexorable tendency to move from a less to a more probable state
  • A measure of the extent to which the component parts within a system are arranged in a way that is distinguishable from random (i.e. high entropy = indistinguishable from random, low entropy = very obviously non-random)
  • A measure of the amount of information that is missing in order to be able to fully specify the state of the components of a system given the state of the system as a whole (this is Shannon’s definition).

The common thread is that entropy is a property of the ensemble, not of an element or constituent. Each individual constituent might have some uncertainty related to it, but on its own, outwith the collective, it cannot have entropy. Furthermore, any system that is made up of component parts which individually have some degrees of freedom (i.e. can move around in some sense) has this property — even if its value is zero. For ensembles that have zero entropy, nothing is unknown, nothing is random, everything is certain, and surprise is impossible.
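
To make Shannon’s definition concrete, here is a minimal sketch (in Python, purely illustrative) of the formula: entropy is computed over a probability distribution of possible states of the ensemble, and it collapses to zero when one state is certain.

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over the distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# An ensemble whose state is certain: one configuration with probability 1.
print(shannon_entropy([1.0]))              # 0.0 bits -- nothing unknown, surprise impossible

# An ensemble that could be in any of 8 configurations with equal probability.
print(shannon_entropy([1/8] * 8))          # 3.0 bits -- three yes/no questions needed

# A skewed ensemble: mostly predictable, occasionally not.
print(shannon_entropy([0.9, 0.05, 0.05]))  # ~0.57 bits -- a little missing information
```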

Let us look at entropy in greater depth, both in thermodynamics and in how we think about information. Skip ahead if you know this.

Entropy in thermodynamics

In the 19th century, steam power was big business, so reducing coal consumption was key to better profitability. Thermodynamics begat entropy as 19th century engineers scratched their heads trying to understand this problem. Engineers knew their engines worked, but without the science they did not really know how to improve them, and as such the engines were desperately inefficient; typically only 3% of the energy generated by the furnace was converted into useful power.

French mathematician Lazare Carnot figured out that energy loss when moving stuff was “a thing”, usually as heat through friction. His son, engineer Sadi Carnot, figured out that engine power only depended on the transport of heat from hot to cold, but that a (big) bit of that heat would be conducted away by the casing. The German physicist Rudolf Clausius coined the word entropy to describe the slug of energy (in the steam engine scenario the 97%) that was unusable in both cases. Finally, Ludwig Boltzmann had the insight (and developed the maths) that there are only a few meaningful macro-level states of the engine (its pressure, temperature, volume and energy), each of which could correspond to many quintillions of micro-states: the individual positions and velocities of the steam molecules inside that system. The more micro-states corresponding to the same macro-state, the greater the entropy (see the simple dice example below). There are only a few ways to have a hot corner of a room. There are many ways for that heat to be dispersed across the room. The more ways, the more entropy.

Using dice combinations to understand macro and micro states. Rolling 7 is the most probable so entropy is highest in that state (i.e. we have the least information about the components rolling a 7 and the most when rolling a 2 or a 12). http://i.stack.imgur.com/
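
A quick way to see the macro/micro distinction is to count it. The sketch below (Python, illustrative only) enumerates the 36 equally likely micro-states of two dice, groups them by their macro-state (the total shown), and reports how many micro-states are consistent with each total: a 7 can be made six ways, a 2 or a 12 only one way, so observing a 7 tells you the least about the individual dice.

```python
from collections import Counter
import math

# Every micro-state of two fair dice: 36 equally likely (die1, die2) pairs.
micro_states = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

# The macro-state is the only thing an observer of the 'total' sees.
macro_counts = Counter(d1 + d2 for d1, d2 in micro_states)

for total in sorted(macro_counts):
    ways = macro_counts[total]
    # Boltzmann-style entropy of the macro-state: the log of the number of
    # micro-states consistent with it (here in bits, using log base 2).
    entropy_bits = math.log2(ways)
    print(f"total {total:2d}: {ways} micro-states, entropy {entropy_bits:.2f} bits")

# total  2: 1 micro-state,  0.00 bits (you know both dice exactly)
# total  7: 6 micro-states, 2.58 bits (the least information about the dice)
```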

This is the crucial point: in a thermodynamic system, pressure, density and temperature tend to even out over time because that equilibrium state has a higher probability (more possible combinations of microstates) than other states, often by orders of magnitude (i.e. millions, billions, often many quadrillions of times more probable). At that point, heat is perfectly dispersed, the arrangement of particles is indistinguishable from random and entropy has won. Boltzmann proved that you get entropy because of a) squillions of things [molecules], b) a squillion times a squillion interactions, and c) statistics.

Entropy in Information

Take a means of encryption. A code you do not know. Maybe an alien code. One that has been comprehensively mathematically encrypted using un-guessably high prime numbers. Quantum computers and all that. Your mission is to guess the second letter of a word. Assume the word is English, and assume it is not a proper noun. How much information do you need given the first letter is ‘q’?

The answer of course is none whatsoever. This is an example of zero entropy in information. For that reason Morse code operators would have no need to send the extra couple of beeps required to transmit the following “u” — the information was unnecessary. To put it another way, one experiences zero surprise that the letter after ‘q’ is ‘u’. In a message, one would experience a lot more surprise on learning that the letter after, say, ‘t’ was ‘u’, after which the number of options quickly narrows down. Fast forward to the modern world and one can reasonably guess whole words from 4–5 characters. This is the principle of predictive text software, and it means you can figure out the entropy of abstract things like the English language. It is also the principle of data compression in computing, which brings us back to Shannon, who first noticed this and described it mathematically. The basic principle is that it is not possible to be surprised by the lottery numbers of last week. Once an event has become certain, it can yield no more new information.
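
The “surprise” here has a precise measure: the surprisal of an outcome is minus the log of its probability. The conditional probabilities below are rough illustrative guesses, not measured letter frequencies, but they show the shape of the argument: a near-certain ‘u’ after ‘q’ carries almost no information, while a ‘u’ after ‘t’ carries several bits.

```python
import math

def surprisal_bits(p):
    """Information content (in bits) of observing an outcome with probability p."""
    return -math.log2(p)

# Illustrative (not measured) conditional probabilities for the next letter.
p_u_after_q = 0.99   # 'q' is almost always followed by 'u' in English
p_u_after_t = 0.04   # after 't', 'u' is just one of many plausible letters

print(f"'u' after 'q': {surprisal_bits(p_u_after_q):.2f} bits")  # ~0.01 bits: barely worth transmitting
print(f"'u' after 't': {surprisal_bits(p_u_after_t):.2f} bits")  # ~4.6 bits: genuinely informative
```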

The maths is identical to Boltzmann’s (in fact, the definition of thermodynamic entropy turns out to be an applied special case of information entropy). This leads to another crucial observation. Entropy is a macro-property of any system that contains unknown properties describable by a probability distribution over any sort of underlying micro-properties. In all cases, as time unfolds and as a stochastic (i.e. random and time-based) process plays out, the macro properties of that system will be drawn inexorably toward the most likely distribution of those microstates, and typically that distribution will be increasingly indistinguishable from random to a neutral observer.
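
For the record, the two formulas, stated informally and without derivation: Boltzmann’s entropy counts the micro-states consistent with a macro-state, Shannon’s generalises this to any probability distribution, and when all micro-states are equally likely the two coincide up to the constant k_B and the choice of logarithm base.

```latex
% Boltzmann (thermodynamics): W = number of micro-states consistent with the macro-state
S = k_B \ln W

% Shannon (information): p_i = probability of micro-state i
H = -\sum_i p_i \log_2 p_i

% Special case: if all W micro-states are equally likely, p_i = 1/W, so
H = \log_2 W \quad\Rightarrow\quad S = (k_B \ln 2)\, H
```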

It is worth knowing that entropy — because it is statistically derived — is physically and intellectually devastating. It is an unavoidable property of things/systems defined as being comprised of other (smaller) things. This brings us back to businesses and organisations, which are collections of resources that have highly specific inter-relationships in space, time, control, sequencing and dependency.

Organisations and defining management

It is tautological to remark that organisations are aggregates of smaller pieces. In theory, those pieces could all be numbered, put into a metaphorical bag and then jumbled up. The parts are now essentially random, and from a systems perspective the organisation is destroyed. Once randomised it is at maximum entropy; it also ceases to have any “organisation” in the descriptive sense of the word. To recover the organisation, each piece needs to be taken out and orientated in the right way with respect to every other piece comprising the organisation. Else it is not organised properly. It should be straightforward to see that this has a relationship to the information required to describe that state. That information describes what those parts are, what they are doing and what their relationships with other parts are. As things are not static, new information is required periodically, producing a time series.

Management is the business of collecting periodic information on that which is being managed, processing it, and performing actions correspondingly so that the organisation embodies a configuration amongst its parts that allows it to perform meaningful functions and accomplish goals in a sustainable manner, and to survive in its environment.

Think about total sales, net profit, project deadlines. These are attenuations of information and simplifications of real life. If this did not happen, management would be no better able to discriminate than the shop floor. Practically, it would be overwhelmed when it came to decisions. The key is to ensure that there are as few variations (or as little variety) as possible in the ways the business could actually be configured in real life and still fit with those management summaries. The more variety, the more organisational resources and interrelationships may be in random configurations. The more variety, therefore, the less likely the configuration of your organisation will hit your stated goals. Lack of knowledge about system configuration is directly proportional to lack of confidence in goal achievement. The rule is: be low entropy.
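
To put a rough number on “variety”: if a management summary is consistent with N distinct real-world configurations of the resources, then roughly log2(N) bits of information are missing, and that is the entropy the manager is carrying. The toy sketch below (Python, with made-up numbers) illustrates the idea.

```python
import math

def missing_information_bits(consistent_configurations):
    """Entropy of a management view: log2 of the number of real-world
    configurations consistent with what management can actually see."""
    return math.log2(consistent_configurations)

# Toy example: 10 lorries, and the summary report says "8 are out on deliveries".
# Without telemetry, any 8 of the 10 could be the ones out: C(10, 8) = 45 configurations.
blurred_view = math.comb(10, 8)
print(missing_information_bits(blurred_view))   # ~5.49 bits of missing information

# With per-vehicle tracking (the KnowledgeCo case), exactly one configuration fits.
print(missing_information_bits(1))              # 0.0 bits -- low entropy
```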

The tidy organisation

If the variations are few, then I can have confidence that my organisation is where I think it is. My decisions will be relevant. There will be little uncommunicated relevant information. Here is a fuller list of features of a low entropy organisation:

  • The status of resources (staff, wares, equipment, sites) is continuously monitored and location/time stamped. “Continuously” in this context means measurements taken at a sufficient tempo to make the corresponding decisions. Daily is fine for a forestry business but not for trucking. Per second is fine for trucking but not Formula 1. Very few businesses require sub-microsecond resolution (maybe the LHC).
  • External transactions (payments, orders, complaints, queries) are continuously monitored and location/time/quality stamped.
  • Internal transactions (staff & budget allocations, process hand-offs, audits, MI) are continuously monitored and location/time/quality stamped. “Quality” means measurable features that correspond with the ability of a resource to fulfil (and sustain fulfilment of) a core function/requirement — that is to say an engineering definition, not a subjective one.
  • Combinations of resources into higher level capabilities are understood, as are the effects and availability of those capabilities.
  • Relationships amongst resources, capabilities and transactions are known and are updated immediately if changed.
  • Planning assumptions are explicit, and the conditions under which they are replaced by data and/or a better assumption are articulated.
  • Data corresponding to all of the points above exist in an authoritative, accurate and secure repository, almost always an information technology (and almost never a human).

Note that the list above is necessary and sufficient for reasonable-fidelity computer simulation, should managers wish to understand hypothetical scenarios and perform ‘what-if’ analysis. A low entropy organisation can go a step further if it engages in continuous comparison of its model (that is, the historic data, the relationships and the assumptions) with its next batch of observed data. No model can be perfect, and occasionally new real data will dumbfound the expected values even when corrected for measurement error. This is the opportunity for low entropy learning, which is the highest value organisational learning available as it forces reflection, incorporates parallax (this is a fascinating concept, worth looking at here, scroll to 00:27:00) and leads to better interpretations of the environment, and so better decisions.
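
One hedged sketch of what “continuous comparison of the model with the next batch of observed data” could look like in practice: expected values and tolerances come from the organisation’s own model, and observations outside the tolerance band are flagged as genuine surprises worth learning from. The metric names, figures and thresholds below are hypothetical.

```python
# Illustrative sketch only: compare a model's expectations with newly observed data
# and flag the genuine surprises. Metric names and tolerances are hypothetical.

predicted = {            # what the organisation's model expected this week
    "depot_arrivals":   420,
    "litres_of_diesel": 18_500,
    "late_deliveries":  12,
}
tolerance = {            # allowance for ordinary noise / measurement error
    "depot_arrivals":   25,
    "litres_of_diesel": 900,
    "late_deliveries":  5,
}
observed = {             # the next batch of real data
    "depot_arrivals":   433,
    "litres_of_diesel": 21_400,
    "late_deliveries":  11,
}

for metric, expected in predicted.items():
    deviation = abs(observed[metric] - expected)
    if deviation > tolerance[metric]:
        # The model was dumbfounded: an opportunity for low entropy learning.
        print(f"SURPRISE in {metric}: expected ~{expected}, saw {observed[metric]}")
    else:
        print(f"{metric}: within tolerance")
```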

Organisational waste heat

However, where my business has a high entropy, there are lots of ways my business could be configured that are consistent with the information I’m receiving as a manager. As such, there could be significant buried variation. There could be all sorts of un-transmitted information. Risks will exist that are indeterminate. Management could be surprised. More importantly, the amount of uncertainty will tend to compound as unknown resources in unknown places influence connected unknown downstream resources in unknown ways and to unknown extents. At the limit, where uncertainty dominates, the configuration of the elements in the organisation may as well be random. This is a recipe for significant waste.

Waste heat #1: Search

Search costs are one of the key hidden costs within a high entropy organisation. Information about the organisation — how many resources there are, what they are doing and who is doing what; who the contracts are with, how much has been spent and whether it represented good value — has to be surveyed. Precisely because trustworthy records do not exist, this is a difficult task, often requiring literal eyeballing by humans to ascertain the location, timing, activity or status of staff and resources. Knowing these basics is table stakes for meaningful management of any kind, and yet there are organisations that cope with institutionalised uncertainty over these resources, expending significant cost in ‘one off’ exercises to find out the answers when inevitable big muscle movements are required. These search costs are waste heat. They are modern equivalents of the wastes first identified by Carnot in his analysis of the steam engine.

That cost contributes neither to productive operations today nor to the ability to sustain productive operations tomorrow. This is money spent because the organisation has failed to platform its resource, capability and transactional information and now needs it; normally to make a critical management decision (e.g. cost cutting or an investment). The survey will be slow (imagine having to redraw a map of London every time you make an unfamiliar trip) and error strewn, given that crude non-digital procedures typically have to come first. The organisation will gawp at the bill to platform the data properly, whilst conducting these poor substitute exercises over and over again.

Waste heat #2: Consensus

Physical search costs, however, are just part of the pain. Surveys employed to search for organisational data typically have to start off basic and sub-optimal (because they are not systemised), which means they are low resolution, contain errors, or contain surprises that management accuses of being errors or otherwise misunderstands. Sometimes elements of the survey need to be redone. Collectively these effects impose an additional second order cost as effort is expended and time burned discussing information (that the organisation should know) with key people in the organisation and achieving alignment on what it means. This is consensus waste heat.

Let’s start with the generous case for consensus costs. New data discovered about the organisation may contradict the mental models used by its managers to run the business to date. In difficult business environments/crises, those contradictions are viewed as “missing pieces of the story”, yielding insights that take the organisation forward. Senior and middle management then update their views about how their business works. This insight needs to be propagated, and typically some form of education/communication needs to happen before the required changes to operational and managerial practice can be fully and faithfully implemented. In the meantime, the organisation continues to run on the ‘old script’ and as a result incurs further costs than would have been the case if the remedies had been adopted as soon as the state of the organisation was discovered (the search costs).

Being less generous... Managers incentivised by competitive game-theory based reward and comparison mechanisms/career progression may seek to protect the perception of their own accountability for inconvenient or challenging data points. This drives pseudo-debates that are pantomimes for what is really going on: positioning and posturing. At its worst this behaviour metastasises into deliberate politics, obfuscation and lies. Why search for information when I can survive (or even thrive) through deliberately avoiding it? If I’m lucky, the environment will not find me out. For the exceptionally cynical, individuals ultimately accountable for those pseudo-decisions shore up CVs for lateral and diagonal jumps well before the proverbial hits the fan. By the time accountability is sought, too much time has passed or too much confusion has set in, and without good records how could accountability be enforced in any case?

Compounding exponential waste

In real life the generous and less-than-generous mix together in basically unknowable proportions, leading to second order unpredictability. All this just leads to delay and more cost before meaningful action is taken. Cost is absorbed in a Groundhog Day of activities such as “stakeholder management” and “change management”. Remember, this is for information that the organisation should just know!

The worst-case scenario is where an expensive search is performed, high effort is expended to achieve consensus, and then the work is (through incompetence or politics) discredited or abandoned. To achieve a low entropy position the organisation has to start again from scratch, and this time with the headwind cost of cynicism from those in the organisation who had to watch the initial attempt(s) fail. This sets up a triple whammy of search, consensus and write-off waste that will require at least two more doses of search and consensus to rectify and will take more resources than the first attempt due to reduced confidence by those involved that it will actually turn into anything. This is a vicious cycle that drives the need for better, more accurate information just as the organisation is proving incapable of addressing it, driving cultural cynicism and further hidden complexity.

At this point let me acknowledge the seven muda (“wastes”) established through the Lean manufacturing techniques of Toyota and successfully incorporated by a number of other organisations. These are waste heat from the point of view of production. Also important to acknowledge is the waste expended as a result of mis-determined customer requirements — building ‘the wrong thing’, the subject of The Lean Startup by Eric Ries, which has become the underpinning philosophy for digital firms. This is waste heat from the point of view of product/service innovation. Whilst each comes from a different perspective of certainty regarding the product/environment fit, both approaches assume a non-dysfunctional ability for management to ascertain the state of the business. Waste is bounded and tractable and, assuming a disciplined management approach, can be eliminated. Wastes spiralling from the vicious cycle outlined earlier compound.

Waste species. The bottom right quadrant effectively signals the absence of meaningful management.

Organisational heat death

Organisations die because of high entropy. Organisations with high entropy will be dominated either by search or by lies. The former is doomed to huge quantities of waste heat — as the organisation expends resources searching and re-searching information about itself so that it can make a decision. It then usually expends even more resources trying to agree that the information is actually accurate and representative. Because its initial discovery was so effortful, invariably it is imperfect, making the information a sitting duck for seasoned organisational game-players. All the while the organisation burns heat not doing the thing it is supposedly there to do.

High and Low Entropy Organisations

KnowledgeCo is intuitively and obviously a better run, more competitive and more sustainable business than EntropyCo. Yet time and again executives in real life avoid, delay or misunderstand the investments (principally in information technology, data exploitation and process re-engineering) needed to achieve a comparable operational profile. Problems attributable to high entropy are misdiagnosed as poor operational performance or lack of digitisation. Furthermore, achieving a low entropy state unavoidably requires at least one, usually painful, cycle of search and consensus before the high information state can be platformed. No amount of investment in (for example) digital tech alone can ‘teleport’ the organisation out of entropy.

It is better to classify organisations as high or low entropy — rather than high or low knowledge, its inverse equivalent. That is mostly to avoid misunderstandings. Engineering firms, medical institutions or law firms must obviously be high knowledge organisations in the sense that the individuals concerned need to have a high amount of knowledge (in contrast with say a building labouring firm — no offence to building labourers, a noble vocation). The irony is that those firms with high individual knowledge are likelier to be higher entropy organisations.

Low entropy organisations are either:

  • Platformed: That is, the information for the resources in the business is contained in information technology that is readily exploitable and kept up to date with reliable, fast, accurate processes; or
  • Simple: The organisation is fundamentally not capable of adopting that many different variations.

Amazon is the example par excellence of the former. Amazon simply cannot not know where all its moving parts are, down to the second. Its business would fail. Its low entropy is likely a side effect, but it is a formidable competitive advantage. Aldi is a good example of the latter in relative terms. Aldi has a variety of product lines an order of magnitude lower than the traditional supermarkets, so whilst it is a complex business compared to a family shop, it is substantively simpler than many of its competitors. Note that fundamentally lower entropy is the explanatory force for sustainable disruption.

It is not necessary to be convinced whether or not entropy applies to organisations — it is not up to individuals after all (least of all organisational leaders). Entropy applies by definition. The environment and time are the eventual judges. Respecting entropy, and seeking interventions to tame it sufficiently and export it, is axiomatic of all systems that can sustain themselves. Organisations that suffer from a prevalence of entropy will find that they have to expend resources (that’s money, folks) de-randomising. Or they will find themselves dead (in the private sector) or crippled (in the public sector).

Entropy is an unfamiliar way to describe organisations. In the next post, I will sketch out how entropy can permeate well ordered organisations and the linkage to decision and measurement tempo (OODA loop).

--


Written by David Williams

Strategic Design. Digital Twins. Operating Models. Across Business, Government and the wider Economy. Views are my own.