The smarter your system is, the dumber your users can be. The dumber your system is, the smarter your users must be.
So as the intelligence of a system increases, the intelligence of its users may decrease in inverse proportion.
And although this sounds like an insult to human intelligence, it is actually a compliment. We have already established that being stupid and lazy is a luxury that users are willing to pay for. Our mission is to empower them to be stupider and lazier about the things that don’t matter, so that they can be smarter and more proactive about the things that do.
So, when a system increases in intelligence, it allows for an intelligence migration. The intelligence of its users does not actually decrease; rather, they are finally free to reallocate their intelligence elsewhere. Intelligence resources are liberated.
This can be imagined as increasing intelligence liquidity, because as human intelligence units are liberated by increases in system intelligence, these units can be shifted anywhere; and if the market for intelligence is rational, presumably those intelligence units go where they are most needed.
This raises a question:
What economic law governs our decision to build systems vs. use systems? If for every hour we invest in building a system, we get more than an hour back in leverage, a rational actor would continue to invest in building without ever pausing to use. But this would be absurd, like a farmer who never eats. And of course, I doubt we have a unit of measurement for the investment value of every additional hour. Has an economist explored this yet?
Let’s assume that I can generate 100 intelligence units per hour — that is my maximum intelligence limit. Let’s assume that System X generates 1 intelligence unit per hour used. Let’s also assume that for every hour I invest in upgrading System X, I spend 100 intelligence units, and increase its output by 1 intelligence unit per hour used.
So, if I invest 100 hours in upgrading System X, I have spent 10,000 intelligence units, but its intelligence output is now 101 units per hour. I am still behind, because operating System X requires a user, and if I am the only user available, its output of 101 units per hour is only 1 unit more than I could generate on my own. So if I spent another 100 hours using System X instead of investing in it, I would generate 10,100 intelligence units; at 1 extra unit per hour, it would take 10,000 hours of use just to recoup the investment.
But let’s assume I can hire other users who are only capable of generating 25 intelligence units per hour. And let’s assume that I can pay them in the currency of intelligence units, and that they will accept a rate of 50 intelligence units per hour. Now my system can generate a profit of 51 intelligence units per hour per user, and if it can accept infinite users, it can generate (51 x an infinite number of users) profit per hour.
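The arithmetic above can be checked with a minimal sketch. All the numbers are the assumptions stated in the text; the variable and function names are my own.

```python
# Assumptions from the thought experiment above.
MY_RATE = 100          # intelligence units I generate per hour (my limit)
BASE_OUTPUT = 1        # System X's output per hour of use, before upgrades
UPGRADE_COST = 100     # units spent per hour invested in upgrading
UPGRADE_GAIN = 1       # output gained (per hour of use) per hour invested

def output_after_upgrading(hours_invested):
    """System X's hourly output after a given investment of hours."""
    return BASE_OUTPUT + UPGRADE_GAIN * hours_invested

# Scenario 1: invest 100 hours, then use the system myself for 100 hours.
invested = 100
spent = UPGRADE_COST * invested              # 10,000 units sunk into upgrades
hourly = output_after_upgrading(invested)    # 101 units per hour of use
generated = hourly * 100                     # 10,100 units from 100 hours of use
print(spent, hourly, generated)              # 10000 101 10100

# Scenario 2: hire users who accept a wage of 50 units per hour.
WAGE = 50
profit_per_user_hour = hourly - WAGE         # 51 units of profit per user-hour
print(profit_per_user_hour)                  # 51
# With n users, hourly profit scales as 51 * n.
```

The solo case nets only 1 unit per hour over my unaided rate, while the hiring case turns the same upgraded system into a per-user profit stream, which is why the model tips toward hiring.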
What is this model missing? The first thing to question is the value of my maximum intelligence limit. Such a limit is irrelevant if the channels through which I can invest or use my intelligence cannot absorb that capacity. For example, what is the relationship between my intelligence output and my ability to upgrade System X? If I were half as intelligent, or twice as intelligent, would that affect my ability to upgrade System X?
Also, what are the alternative activities to upgrading System X? Are there other systems to upgrade, like System Y or System Z? Those would represent my “build” choices, that is, my investment options. But what about my “use” choices? If, instead of building systems, I want to just operate them, what are my options? Perhaps without the use of any systems, I can turn 100 intelligence units into only 1 value unit per hour — like a hunter-gatherer. And with the use of primitive systems, I can turn 100 intelligence units into 10 value units. And in a modern society, there are various jobs which give me varying degrees of leverage, and let me turn 100 intelligence units into anywhere from 150 to 1,000 value units per hour, but only pay me a fraction of that, while retaining the rest as profit. These usage options represent my true opportunity cost for investing in systems versus using them.
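These “use” options can be tabulated in the same sketch style. The conversion rates are the ones given above; the wage fraction for jobs is my own illustrative assumption, since the text only says a job pays “a fraction.”

```python
MY_RATE = 100  # intelligence units per hour, as before

# Value units produced per hour under each usage option (from the text).
usage_options = {
    "hunter-gatherer": 1,
    "primitive system": 10,
    "modern job (low leverage)": 150,
    "modern job (high leverage)": 1000,
}

# Assumed: a job pays me this fraction of my output; the rest is profit.
WAGE_FRACTION = 0.3

for name, value_per_hour in usage_options.items():
    leverage = value_per_hour / MY_RATE
    # Self-employed options keep everything; jobs keep only the wage fraction.
    take_home = value_per_hour * WAGE_FRACTION if "job" in name else value_per_hour
    print(f"{name}: leverage {leverage:.2f}x, take-home {take_home:.0f}/hour")
```

The interesting comparison is between the best take-home rate here and the per-hour return on upgrading a system: whichever is higher at the margin is where a rational actor’s next hour should go.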
There is also the question of how to model the linearity of output. For every hour I use a system, do I get the same output? For every hour I invest in upgrading a system, do I get the same output? Presumably not. Investment output can certainly be assumed to be more volatile than usage output, and neither is perfectly linear.
Reality is even more complex, as there is a marketplace of individual actors making these decisions, in which all systems are owned by corporate actors, all being continuously upgraded, and there are no usage options that are not owned (to steal the phrase from Marx, the means of production are monopolized).
And this is the point at which I must stop, because there are diminishing returns for me (specifically, me, Francis) to invest in this thought experiment. I hope someone else has a different value equation and can carry the model further.
It is time for me, rather, to switch from investment mode to usage mode, and to extract value from the investment I’ve made. The primary value here is the insight that systems are like batteries, in which I can store up my intelligence today and draw upon it tomorrow. Of course, this analogy is imperfect, because powerful systems can actually generate more intelligence than I could ever generate as a user. But even a primitive system can be like a battery, and even an inefficient battery can be very valuable, because you never know when you’ll need extra intelligence units in the future. So the body of all human thought is like a giant storage device for intelligence units, which can be drawn upon in the future.