Addison Maille
12 min read · Feb 18, 2024

When systems are easy to learn, it’s because the 1st principles of the system are very simple. All systems have the same five 1st principles: components, inputs, processes, outputs, and purposes. Read my article Systems Thinking Through 1st Principles if you want to understand more. The game of tic tac toe is very easy to learn because its 1st principles can be understood in a matter of minutes. The components are the board, the pieces, and the rules of the game. The inputs are the choices made by the players, the process is the playing of the game, the output is what happens at the game’s conclusion, and the purpose of the game is a strategic competition between two people. The degree to which we understand any given system is the degree to which we can predict it. While we can’t perfectly predict the input of our opponent, we can easily determine the optimal move for us to make. And if we know enough about the opposing player, we can easily predict the outcome of the game. The system, aka the game of tic tac toe, is one that is fully known to us.
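To make “fully known” concrete, here is a minimal Python sketch, purely illustrative and not from the original article, that searches tic tac toe’s entire game tree with minimax. The tree is small enough (a few hundred thousand possible play-outs) to evaluate exhaustively, which is exactly why the game holds no surprises:

```python
# A minimal sketch: exhaustively search tic tac toe with minimax.
# Because the full game tree fits in memory, the outcome under
# perfect play is completely predictable.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value under perfect play: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if " " not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    children = [value(board[:i] + player + board[i + 1:], nxt)
                for i, cell in enumerate(board) if cell == " "]
    return max(children) if player == "X" else min(children)

print(value(" " * 9, "X"))  # prints 0: perfect play always ends in a draw
```

The same exhaustive search is hopeless for chess, whose game tree is astronomically larger.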

The problem in understanding systems arises when a system gets too complex for us to predict its output. A complex system is a system whose output we can’t fully predict because one or more of its 1st principles are too complicated to fully understand. Chess is a game where we completely understand the components: the game board, the pieces, and the rules. We understand the process of playing chess, the possible outputs of a win, a loss, or a draw, and why we play it. But the optimal inputs for playing chess, even with AI, continue to be a mystery. There are so many possible inputs that even the best chess-playing AIs can’t say with certainty what the next best move will be in every game of chess. Unlike tic tac toe, chess is truly a complex system.

And then there’s the complexity that happens outside of systems. An avalanche doesn’t have a clear purpose. It can sweep seemingly an entire mountain face, or it can be a trickle of gravel down a three-foot embankment. While we can discover the conditions under which avalanches happen, it’s very hard to understand an avalanche’s deeper purpose. It’s not even clear that there is one. The problem posed by avalanches and many other real phenomena is that our systems must be able to operate in the midst of this added complexity. Unlike chess, which is an entirely controlled system we created, real world systems must operate in the face of complexity that we often have no control over.

To deal with the added complexity of reality, we have had to build the unpredictable nature of reality into our systems as best we can. Rather than growing exactly the amount of crops needed for a given year, civilizations have always sought ways to keep a reserve of food should a harvest fail. Unpredictability isn’t a problem unless our systems can’t handle that unpredictability. This is a practice we continue to this day. Rather than simply owning a car and a home, we own cars and homes with insurance policies, so that should something happen, we are covered. While we have always strived to build systems that are as accurate as possible, when it comes to reality, we’ve always known we need some kind of buffer and/or insurance.

What we really see here is a secret that, until five minutes ago, we all understood. The secret to dealing with complexity, to the best extent possible, was not to gamble with it any more than necessary. When we come across the unpredictable nature of complex systems, our first task is to remove as much risk as we can. The system remains complex, but we aren’t nearly as exposed to the risk that complexity poses. What this really meant was simplifying the complexity to the greatest extent possible. This is precisely what 1st principles allow us to do.

In my article Systems Thinking Through 1st Principles, the case is made that 1st principles serve as a method for simplifying otherwise complicated/complex concepts into their foundational principles. It’s a way of simplifying that doesn’t distort an already complex concept. The one limitation of 1st principles is that they can never tell you if an idea will work. They can only tell you if it will not. This is one element of 1st principles that isn’t talked about enough. They are a way of separating the possible ideas from the impossible ones. This is why 1st principles simplify complexity but don’t remove it.

For example, good learning requires that we apply the concept we are learning. Application is how we attain feedback to see if our understanding of that concept is correct; it’s when we put the idea into action. Since application of an idea is a 1st principle of learning, and most of the subject we call history has literally no application at all, by what means can we say that we learned history beyond memorization? Historical knowledge with no application is about as useless as a $3 bill. It’s why so few people care about history and why even fewer remember very much of it. With no application, there is nothing to make it stick in our minds or verify its accuracy.

So if I see a lesson plan or some other form of learning with no application for the entire session, then I can say with great confidence that nothing being taught is likely to stick in any appreciable way. I don’t need to know much, if anything, about the subject being taught. But if it does have an application and I know nothing about the subject, then all I can tell you is that it’s possible it will succeed. This is the fundamental limitation of 1st principles: they have no ability to tell us which of the possible ideas are likely to work, only that they are possible.

To move beyond 1st principles into the minutiae of complex systems requires 1st-hand skills, or what we also know as experiential learning. We must have real world experience successfully navigating a system. This is yet another way to reduce the complexity of a system: by reducing the number of possible iterations we must consider. A functional iteration is an iteration of a system that produces the outputs necessary to fulfill its purpose. As complexity in a system increases, the number of functional iterations as a percentage of all possible iterations decreases exponentially, even though the raw number of possible iterations grows. What this means is that blind guessing in complex systems becomes less and less likely to succeed the more complex the system gets.

To deal with this problem, our understanding of a given complex system must improve. More specifically, it must become nuanced. This is what experiential learning does. As the number of components, inputs, processes, and/or outputs increases, expanding the complexity of the system, each will simultaneously require more and more nuance to accurately understand. What I call the law of fine-tuning states that the more complex a system is, the more specialized one or more of its components, inputs, processes, and/or outputs will have to be to produce a functional iteration. This increased need for fine-tuning is why greater complexity demands more experiential learning. Without it, we won’t know how to narrow the number of possible iterations down to a manageable number.

To play chess at a really high level requires very finely tuned inputs in the form of moves. This is why novices can’t beat world champions no matter how many games they play: the odds of making 20 or more finely tuned moves in a row are effectively zero. To spell increasingly long words in the English language requires a more comprehensive understanding of the rules of phonetics (inputs) as well as the vocabulary itself (output). As English words get longer, it becomes harder to land on a real word via a random sequence of letters. To see this in a more visceral form, let’s look at how it plays out in reality.

When looking at the alphabet, there are a total of 676 possible two-letter combinations: 26 squared, which I will write as 26^2. Of those combinations, according to the most up-to-date online English Scrabble dictionaries as of 2023, 124 are valid two-letter words. That’s about 18.3%, meaning that if we choose any two letters at random, like a lottery, we would get a real word a little less than 1 in 5 times. For simplicity’s sake we’ll call this a 1 in 5 chance of getting a word.

Now observe what happens to our chance of successfully spelling a word through random iteration as we increase the complexity from 2-letter combinations to 3, 4, 5… up through 9-letter words.

2-letter combinations: 1 in 5
3-letter combinations: 1 in 17
4-letter combinations: 1 in 114
5-letter combinations: 1 in 948
6-letter combinations: 1 in 13,423
7-letter combinations: 1 in 229,358
8-letter combinations: 1 in 2.6 million
9-letter combinations: 1 in 132.4 million
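These odds are straightforward to reproduce. Here is a short Python sketch; the file name words.txt is a hypothetical stand-in for whatever valid-word list you use, and exact ratios will vary slightly by dictionary:

```python
# A sketch of reproducing the table above. Assumes "words.txt" is a local
# list of valid English words, one per line (hypothetical; substitute your
# own dictionary). Exact ratios will vary with the word list used.
from collections import Counter

with open("words.txt") as f:
    words = [w.strip().lower() for w in f if w.strip().isalpha()]

# How many valid words exist at each length.
counts = Counter(len(w) for w in words)

for n in range(2, 10):
    possible = 26 ** n          # every random string of n letters
    valid = counts.get(n, 0)    # how many of those strings are real words
    if valid:
        print(f"{n}-letter combinations: 1 in {possible / valid:,.0f}")
```

Notice that the denominator (26^n) grows exponentially while the number of valid words does not. That is the law of fine-tuning showing up in the data: the fraction of functional iterations collapses as complexity rises.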

What we also see is that in the case of small words like dad, we can change one letter to make sad, lad, had, bad, did, dab, dud, and so on. But if you tried to change even one letter in the word uncopyrightable, you would not get a viable word. In fact, you would have to change most of its 15 letters to get a different word, which would be next to impossible to do by chance. As in chess, you would have to know a great deal about what you are doing. As complexity increases, the law of fine-tuning demands more. As the need for fine-tuning increases, so does the aversion to change, unless that change is made with a high level of comprehension. And as complexity increases, that comprehension will need to come from experiential learning. Experience is the only thing we’ve ever found that can provide us with fine-tuning. This is the literal consequence of reality and all the minutiae it brings. Without extensive real world experience, we won’t know all the tiny details one must know to navigate complex systems in reality. Even complex computer models are only possible because of the experiential expertise that guided their development. This is in stark contrast with the overwhelmingly abstract learning that is deployed today.
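The dad-versus-uncopyrightable claim can be checked directly. A small sketch, reusing the hypothetical words.txt from above:

```python
# Count the valid words reachable from a given word by changing exactly
# one letter. Reuses the hypothetical "words.txt" word list from above.
import string

def one_letter_neighbors(word, dictionary):
    """Return every valid word that differs from `word` in exactly one position."""
    found = set()
    for i in range(len(word)):
        for c in string.ascii_lowercase:
            if c != word[i]:
                candidate = word[:i] + c + word[i + 1:]
                if candidate in dictionary:
                    found.add(candidate)
    return found

with open("words.txt") as f:
    dictionary = {w.strip().lower() for w in f}

print(one_letter_neighbors("dad", dictionary))              # many: sad, lad, bad, dab...
print(one_letter_neighbors("uncopyrightable", dictionary))  # expected: an empty set
```

Short words sit in dense neighborhoods of other valid words; long words sit alone, which is exactly the fine-tuning described above.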

Even if we can only partially predict a highly complex system, that still radically reduces the number of possible iterations/problems our systems must be able to handle. A system that can handle a billion different possibilities, while it sounds impressive, will be far more complex than a system that only has to handle 100. This is why it’s far more advantageous to set controlled burns in places prone to forest fires than to constantly put out forest fires after they start. By setting the fires ourselves, we radically reduce the complexity of having to put out a massive forest fire. Controlled burns effectively simplify the system of managing forest fires.

This is what all strategies for understanding complex systems do. We are constantly trying to simplify the systems in order to reduce the actual workload required to understand them. And to understand them, we use a combination of 1st principles along with experiential, reality-based learning. That’s how we reduce complexity to something far more manageable, such that we can plan our systems around it.

So why do we seem to be losing this fight against complex systems? The short answer is that we are now trying to simplify our systems without the second piece of the puzzle. While everyone is parroting the term 1st principles, the movers and shakers seem to be trying to remove experiential learning and expertise altogether. C-suite executives have increasingly distanced themselves from the day-to-day operations of their companies. Boeing took its corporate headquarters, which used to be right there in Seattle on the same campus as its largest manufacturing plant, and put it in Chicago, more than 1,000 miles away. It was intentionally distancing itself from the experiential expertise that made Boeing such a great company.

Even school districts are starting to do this: superintendents are moving their offices away from the high school or middle school campuses where they used to be located. Academics tend to have less and less experience in the very fields they claim to be experts in. Teachers often need only two years of classroom experience before they can become principals. We now have this desperate need to separate management/administration from the very systems they are supposed to be managing and administering.

As best I can tell, many, if not most, of what one might call the modern elite have gotten confused about how 1st principles can be used. They think that 1st principles are all that is needed to choose the right answer, rather than to remove wrong answers. Even worse, MBAs are increasingly a group of over-educated people who just regurgitate the same business school principles of cutting costs to increase short term profits for executives and boost the stock price. They really do think they can make decisions about complex systems without the second half of what humanity has always used to conquer complexity: experience.

More and more large companies, governments, universities, districts, and other large institutions are making larger and larger changes while clearly having no idea what the Hell they are doing. Nobody sent them the memo about the fragility of complex systems in the face of large and poorly informed changes. We have decided that experience is no longer necessary to manage and predict large complex systems. And if you think AI will save us, think again.

We’ve never found a better system for teaching complexity than experience. It’s why experience continues to be the number one quality companies look for in the people they hire. The problem with nearly every rollout of AI is that we are increasingly using it to bypass learning of all kinds rather than enhance it, with chess being among the very few exceptions. Smartphones have made our young people dumber. Social media has set our social skills back to levels that increasingly make young people seem as if they’re autistic, even though they aren’t.

When we look at how humanity has tackled increased complexity, we’ve always done it the same way: we found ways to increase the totality of human learning, with greater specialization and a greater baseline level of education for everyone. After Gutenberg’s printing press, literacy rose steeply across Europe. The Industrial Revolution led to an explosion of new professions and skill sets, and the digital revolution did the same. Each rapid increase in complexity was met with a rapid increase in human learning. And when human learning fell to the point that people could no longer support the complex society they were living in, that society devolved to a much simpler level that was sustainable. This is all of human history in a nutshell.

Right at the time when human learning is falling off the proverbial cliff, we are creating the most complex systems the world has ever seen. We are also removing the kinds of jobs that used to teach reasoning skills, in the legal profession, writing, computer coding, and many other fields that AI is already beginning to take. We are removing opportunities to learn from job experience and other real world experiences and replacing them with… nothing.

Right as we are ramping up the complexity of the world like never before, we are pumping out dumber students increasingly devoid of real world skills. The antidote to complexity for all of human history was our superpower of learning, and it’s that same superpower we appear to be turning our backs on. If we aren’t finding better and more profound ways to use the superpower of learning, then what are we? When we talk about challenges, what we really mean is the incredible learning journey that comes from overcoming difficult things. If AI takes over more and more of our jobs and does them better than we ever could, then what great problems will be left for us to solve?

Literally every promise of Utopia that has EVER been made has either fizzled out before it could be enacted or been enacted and turned into a Dystopia. I have yet to hear a single answer to this question that doesn’t amount to blah blah blah, human ingenuity, blah blah blah, and a miracle will happen. The dumbest students coming out of our universities and high schools appear to be the most unhappy. Lifespan, indebtedness, critical thinking, economic opportunity, global political stability, and many more measures are all trending in the wrong direction. I am the father of three young boys, which means I desperately want to be wrong about this, but every time I look, the picture gets worse.
