Engineered Inefficiency

I was on the NYSE floor recently for a reception — which always makes me wonder, do people throw parties in other highly technical spaces like, say, the air traffic control tower at JFK, or the control room of a nuclear power plant? — and I was talking to a NYSE functionary about the role of the specialists in the era of automated trading.

I suppose it’s obvious to anyone working in finance, but I just hadn’t thought about the fact that the role of the specialist is to introduce inefficiency when it’s beneficial to do so. When trading gets too volatile, they essentially distort market signals by buying or selling their assigned security on their own account, deliberately creating friction in an otherwise frictionless market. Further, when to intervene is left to their judgment; the arbitrariness of the intervention is itself part of the obfuscation, or inefficiency, that they introduce.
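That conditional, discretionary friction is easy to caricature. Here is a toy Python sketch of the idea, emphatically not of how NYSE specialists (today's designated market makers) actually operate: the 2% threshold, the lean-against-the-move rule, and the coin flip standing in for discretionary judgment are all invented for illustration.

```python
import random

# Toy model of discretionary friction. Not how NYSE specialists actually
# trade; the threshold and the coin flip are invented stand-ins.

VOLATILITY_THRESHOLD = 0.02  # hypothetical: only swings over 2% invite friction

def specialist_order(last_price, new_price, willing_to_intervene):
    """Return a damping order ('buy', 'sell', or None).

    The specialist leans against the move on their own account, buying
    into a sharp drop and selling into a sharp rise, but only when the
    move exceeds a threshold AND they choose to act. That second,
    discretionary condition is the engineered inefficiency.
    """
    move = (new_price - last_price) / last_price
    if abs(move) < VOLATILITY_THRESHOLD or not willing_to_intervene:
        return None  # frictionless: the market signal passes through untouched
    return "buy" if move < 0 else "sell"

# A 3% drop; whether friction appears is left to (simulated) human judgment.
print(specialist_order(100.0, 97.0, willing_to_intervene=random.random() < 0.5))
```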

Since then I’ve been trying to think of other instances of engineered inefficiency. One very topical example comes to mind: the Electoral College, whose original intent was to add a layer of mediation between the popular vote and the outcome. Its function is similar to that of the NYSE specialists: to inject some supervisory judgment, for better or worse, into a process that otherwise seemed uncontrolled.

In both cases, a human choke point is introduced to control a process that, in spite of being the aggregation of many acts of human judgment, is effectively inhuman (in the sense that some consider language to be inhuman). We quantize a mass effect in an attempt to bring it down to human scale, more so that we can comprehend and regulate it than to improve its function. This is like the role of the driver/passenger in an autonomous vehicle, which of course is still to be worked out — a generally passive presence who nonetheless provides reassurance that driving is still a human process, assisted but still under our control, and if nothing else, an object for assigning responsibility to, even if that responsibility is in the next moment deflected.

It may be that our role in an increasingly machine-managed world is to be a source of exactly human-scale inertia, to unpredictably slow down or disrupt automated processes.

What are other uses of friction, indirection, obfuscation, or inertia in systems? What systems could benefit in some way from the injection of human-powered inertia? Maybe the Internet itself, and the process of how information is disseminated across it, is an example.

Take fake news. Most fake news is easily discredited, but viral propagation is so fast that even quickly refuted stories reach millions before the correction catches up. Lacking other ways to authenticate information on the web, we react to the frequency and multiplicity of the sources from which a story is reported, with positive or negative valence being almost immaterial. We establish truth through quorum, so once a story has spread, its influence cannot be undone even if it is disproved.
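That quorum effect is simple enough to sketch. In the toy Python model below (the story, the source names, and the discount factor are all invented), credibility is just a count of sources, and a refutation discounts the count rather than resetting it:

```python
from collections import defaultdict

# Toy model of "truth through quorum". The story name, the source names,
# and the 0.8 discount are invented purely for illustration.

sources_seen = defaultdict(set)

def report(story, source):
    """A source repeats a story; valence (pro or con) is not even recorded."""
    sources_seen[story].add(source)

def perceived_credibility(story, refuted=False):
    quorum = len(sources_seen[story])  # credibility = multiplicity of sources
    return quorum * (0.8 if refuted else 1.0)  # refutation discounts, never resets

for i in range(1000):  # the story spreads to a thousand outlets
    report("fabricated scoop", f"site-{i}")

print(perceived_credibility("fabricated scoop"))                # 1000.0
print(perceived_credibility("fabricated scoop", refuted=True))  # 800.0: still huge
```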

Has our market for information become too efficient, and subject to the storms that affect capital markets? Could we inject more human-mediated inertia into the dissemination of information on the web? Would that make for a more human-scale, more human, system?
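One naive way to picture that inertia: put a human review step between a share and its amplification, so that propagation waits on a judgment call. A hypothetical sketch, with the `approve` callable standing in for the human:

```python
import time
from collections import deque

# Sketch of human-mediated inertia in dissemination. Entirely hypothetical:
# shares queue behind a human review step instead of amplifying instantly.

review_queue = deque()

def share(story):
    """Queue a share for review rather than propagating it immediately."""
    review_queue.append((story, time.time()))

def human_review(approve):
    """Drain one queued share, amplifying it only if a human approves.

    `approve` is a callable standing in for human judgment; its delay
    and its unpredictability are exactly the inertia being injected.
    """
    if not review_queue:
        return None
    story, queued_at = review_queue.popleft()
    waited = time.time() - queued_at
    return (story, waited) if approve(story) else None

share("unverified scoop")
time.sleep(0.1)  # the story sits with a human instead of going viral
print(human_review(approve=lambda s: False))  # judgment says no: spread stops
```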

Please let me know if you think of other instances of engineered inefficiency. Is there a name for this?
