Aneel
Jun 15, 2015 · 4 min read

Originally published here.

Our model of the world is wrong.

It’s true in software, in distributed systems, in organizations… everything.

Reading primary sources, or learning how and why a thing was made, is essential: it tells you the conditions that held when the thing was built, and the bounding scales beyond which it may become unsafe.

It began to knit together around OODA:

  • ooda x cloud — positing how OODA relates to operating models
  • change the game — the difference between O — A and -OD- and what we can achieve
  • pacing — the problem with tunneling on “fast” as a uniform good
  • deliver better — the real benefit of being faster at the right things
  • ooda redux — bringing it all together

OODA is just a vehicle for the larger issue of models, biases, and model-based blindness: Taleb’s Procrustean Bed, where we chop off the disconfirmatory evidence that suggests our models are wrong AND manipulate [or manufacture] confirmatory evidence.

Because if we allowed the wrongness to be true, or if we allowed ourselves to see that differentness works, we’d want/have to change. Change hurts.

Our attachment [and self-identification] to particular models and ideas about how things are in the face of evidence to the contrary — even about how we ourselves are — is the source of avoidable disasters like the derivatives-driven financial crisis. Black Swans.

  • Black swans are precisely those events that lie outside our models
  • Data that proves the model wrong is more important than data that proves it right
  • Black swans are inevitable, because models are, at best, approximations

Disconfirmatory evidence is more important than confirmatory evidence

Antifragility is possible, to some scale. But I don’t believe models can be made antifragile. Systems, however, can.

  • Models that do not change when the thing modeled (turtles all the way down) changes become worse and worse approximations
  • Models can be made robust [to some scale] through adaptive mechanisms [or, learning]
  • Systems can be antifragile [to some scale] through constant stress, breakage, refactoring, rebuilding, adaptation and evolution — chaos army + the system-evolution mechanism that is an army of brains iterating on the construction and operation of a system

The way we structure our world is by building models on models. All tables are of shape x and all objects y made to go on tables rely on x being the shape of tables. Some change in x can destroy the property of can-rest-on-table for all y in an instant.

  • Higher-level models assume lower-level models
  • Invalidation of a lower-level model might invalidate the entire chain of downstream (higher-level) models — higher-level models can experience catastrophic failures that are unforeseen
  • Every model is subject to invalidation at the boundaries of a specific scale [proportional to its level of abstraction or below]
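The table example above can be put in code. This is a minimal illustrative sketch with hypothetical names (`Tabletop`, `can_rest_on_table` are mine, not from any library): every higher-level object encodes an assumption about the lower-level model, so one change to the base invalidates the whole chain at once.

```python
# Sketch (hypothetical names): higher-level "models" encode assumptions
# about lower-level ones. Changing the base invalidates the chain.

TABLE_SHAPE = "flat"  # the lower-level model: all tables are of shape x


class Tabletop:
    """A higher-level model: an object y made to go on tables of shape x."""

    def __init__(self, name, assumes_shape="flat"):
        self.name = name
        self.assumes_shape = assumes_shape

    def can_rest_on_table(self, table_shape):
        # The object's validity depends entirely on the lower model holding.
        return table_shape == self.assumes_shape


objects = [Tabletop("lamp"), Tabletop("laptop"), Tabletop("mug")]

# While the base model holds, everything built on it works:
assert all(o.can_rest_on_table(TABLE_SHAPE) for o in objects)

# One change in x destroys can-rest-on-table for every y in an instant:
TABLE_SHAPE = "spherical"
assert not any(o.can_rest_on_table(TABLE_SHAPE) for o in objects)
```

None of the `Tabletop` objects changed; only the assumption underneath them did. That is the catastrophic, unforeseen failure mode of stacked models.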

Even models that are accurate in one context or a particular scale become invalid or risky in a different context or scale. What is certain for this minute may not be certain for this year. What is certain for this year may not be certain for this minute. It’s turtles all the way down. If there are enough turtles that we can’t grasp the entire depth of our models, we have been fragilized and are [over]exposed to black swans.

This suggests that we should resist abstractions. Only use them when necessary, and remove [layers of] them whenever possible.

Resist abstraction

Rather than relying on models as sources of truth, we should rely on principles or systems of behavior like giving more weight to disconfirmatory evidence and actively seeking model invalidation.

OODA, like grasping and unlocking affordances, is a process of continuously checking the model of the world against the experience of the world. And seeking invalidation means getting to the faults before the faults are exploited [or blow up].

Actively seek model invalidation
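One way to operationalize "actively seek model invalidation" in code — an illustrative sketch only, with hypothetical names (`seek_invalidation` and the toy model are mine): treat the model as a falsifiable predicate and probe for inputs that break it, rather than collecting confirmations.

```python
# Sketch (hypothetical names): probe a model with inputs chosen to break it.
# One disconfirming case outweighs any number of confirmations.

def model(n):
    """A toy model: claims that squaring any number makes it bigger."""
    return n * n > n


def seek_invalidation(model, probes):
    """Return the first probe that disconfirms the model, or None."""
    for p in probes:
        if not model(p):
            return p  # the model is invalid at this point in the space
    return None


# Confirmatory probes keep "proving" the model right...
assert seek_invalidation(model, [2, 3, 10, 100]) is None

# ...but probing the boundaries finds where it breaks (n = 1 here):
assert seek_invalidation(model, [2, 1, 0, -1]) == 1
```

The same idea at system scale is property-based testing and chaos engineering: generate the stress that would expose the fault before the world does.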

Bringing it all back around to code — I posit that the value of making as many things programmable as possible is the effect on scales.

  • Observation can be instrumented > scaled beyond human capacity
  • Action can be automated > scaled beyond human capacity
  • Orientation and decision can be short-circuited [for known models] > scaled beyond human capacity
  • Time can be reallocated to orienting and deciding in novel contexts > scaling to human capacity

That last part is what matters. Of all our technologies, we are still the best at understanding and successfully adapting to novel contexts. So we should optimize for making sure that’s where our time is spent when necessary.

Scale problems to human capacity.
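The four bullets above can be sketched as a tiny dispatch loop. This is illustrative only, with hypothetical names (`KNOWN_RESPONSES`, `handle` are mine): for known models, orientation and decision are short-circuited into an automated observation-to-action mapping; anything novel is routed to a human, whose attention is the scarce resource being conserved.

```python
# Sketch (hypothetical names): short-circuit orient/decide for known models,
# and reserve human attention for novel contexts.

KNOWN_RESPONSES = {            # known models: observation -> automated action
    "disk_full": "rotate_logs",
    "process_down": "restart_process",
}


def handle(observation, escalate):
    """Automate the known; scale the novel down to human capacity."""
    if observation in KNOWN_RESPONSES:
        return KNOWN_RESPONSES[observation]  # O -> A, short-circuited
    return escalate(observation)             # full OODA, done by a human


human_queue = []

# Known contexts never consume human time:
assert handle("disk_full", human_queue.append) == "rotate_logs"
assert human_queue == []

# Novel contexts are exactly what reaches the humans:
handle("novel_weirdness", human_queue.append)
assert human_queue == ["novel_weirdness"]
```

The point is not the dictionary; it is what stays out of `human_queue`. Instrumented observation and automated action scale past human capacity precisely so that the queue of novel problems stays within it.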

The whole thing is a bit of a rambling, run-on mess. But it’s a start. The problems are all psycho-epistemological. Which means they’re all manageable. :)
