Information Asymmetry, Technology, and the “Inscrutability” Fallacy

Alex Tabarrok and Tyler Cowen think that emerging technologies will make Akerlof’s “Market for Lemons” go the way of the dodo. Symmetric information about transactions, services, etc. will become the way of the world, and being scammed in a manner you cannot detect prior to a transaction will become a thing of the past. Ironically, however, this cheery and optimistic assessment comes at a time of profound hysteria over “black boxes” and “the code we can’t control.”

So on one hand, we have Tabarrok and Cowen’s cheery optimism about data analytics, artificial intelligence, sensors, etc. leading to unprecedented transparency and symmetric information. On the other, we have a cottage industry of critics convinced of the exact opposite — that “algorithms” are inscrutable, dangerous, and out of control, slowly eroding human control and choice.

So how can we possibly reconcile these two perspectives? If technology makes information symmetric, then those same technologies cannot, by definition, be so inscrutable, dangerous, and unreliable. This is especially true when we hear algorithm critics talk about the ways in which data systems reinforce pre-existing social problems. The idea that negative social outcomes could result from formal and informal markets in which individuals have all the information necessary to evaluate decisions relative to some nominal interlocutor induces cognitive dissonance.

Alternatively, if technology is so inscrutable, mystical, and out of control, why does it produce the undeniable improvements in consumer decision-making and transparency that Tabarrok and Cowen catalog? After all, the transparency inherent in blockchains seems to be the opposite of inscrutable. And while a few individuals who give an Uber driver bad ratings could be wrong, is everyone who rated him wrong?

First, it is unclear at best whether asymmetric information will be reduced across the board, because it isn’t clear that asymmetric information is purely a technology issue. Second, humans (and institutions) hate to be measured and judged. Third, it is impossible to say whether technology is helping us make better decisions without first acknowledging that technology itself may not be the most important aspect of how we make decisions. Lastly, what counts as a good decision may vary with the preferences of the person analyzing it — sometimes in dramatic ways.

Readers who want to see some earlier takes on this can click here and here for posts from my old blog.

It’s not really apparent that the problem is technological in nature

A key example of the first issue is the problem of noisy signals. If principals had the means to surveil and discipline agents perfectly, principal-agent problems would not exist in the first place. Noisy signals are a recurring feature of modern social life, and information technology is not a panacea for them. In fact, one might observe that artificial intelligence (a technology Tabarrok and Cowen often cite as a mechanism for making social interactions symmetric) can be fooled in ways that we haven’t quite begun to grapple with yet.

If you understand everything needed to objectively measure some kind of economic transaction, why not just automate it altogether? Perhaps, given the widespread predictions of autonomous, self-driving cars, Tabarrok and Cowen’s use of cars as a prime example is understandable. However, it is questionable whether this will carry over to many other areas of social life. A key assumption of Tabarrok and Cowen’s thesis is technological advances in reputation management: cheating becomes less valuable when it comes back to bite you. But is assessing reputation in complex domains really a problem of technology?

See, for example, this scenario:

In Sandis and Taleb’s article, “bad luck” plays a crucial role in justifying their heuristic. But “bad luck” creates its own issues that make perfect enforcement prohibitively costly. To get an agent’s “skin in the game,” the principal needs to be able to punish the agent for bad behavior. But rarely is behavior observed. Rather, it is some outcome on which the principal conditions their punishment. The outcome is to some extent a function of the agent’s effort, but it’s also subject to unknown randomness. In the builder example, the probability that a building will fail is related to the builder’s effort, but it is not inconceivable that an expertly constructed structure might collapse.

Principals get noisy signals of agent behavior. It is unclear whether an outcome is the result of poor decision-making or bad luck. This distinction may or may not matter, depending on the case. However, in many instances where it is difficult to observe the agent’s behavior, the optimal solution to the principal-agent problem still leaves the agent somewhat insulated from the costs of their actions.
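To make the noisy-signal point concrete, here is a toy simulation (all probabilities are invented for illustration, not drawn from any source): a principal who punishes every bad outcome ends up punishing the diligent agent a meaningful fraction of the time, because outcomes only probabilistically reflect effort.

```python
import random

random.seed(0)

def outcome_is_good(effort):
    """Success probability rises with effort (0 to 1),
    but a random draw -- luck -- always plays a role."""
    return random.random() < 0.5 + 0.4 * effort

TRIALS = 100_000
# A principal who punishes on every bad outcome:
diligent_punished = sum(not outcome_is_good(1.0) for _ in range(TRIALS))
shirker_punished = sum(not outcome_is_good(0.0) for _ in range(TRIALS))

# The diligent agent is punished roughly 10% of the time, the shirker
# roughly 50%; the signal separates them only statistically, never
# case by case.
print(f"diligent punished: {diligent_punished / TRIALS:.1%}")
print(f"shirker punished:  {shirker_punished / TRIALS:.1%}")
```

No amount of instrumentation changes the structure of the problem here; better sensors narrow the noise band but do not eliminate the luck term.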

In simple systems where outcomes follow from agent choices, this is not really too great of a problem. Let us assume car insurance companies can thoroughly instrument every aspect of the car to collect fine-grained data on your driving behavior. Yes, they cannot measure the other drivers on the road. But they know enough about your behavior to at least understand how you cope with a traffic environment that they can assume to be adversarial regardless of your actions. This is not really a model that transfers to many other kinds of social relationships:

As a risk management rule, Hammurabi’s code ensures that builders suffer the same costs as the owners. However, it also probably ensures an under-provision of houses. Suppose that even a house built by an expert builder has some risk of collapsing despite the builder’s best efforts. Knowing this, a builder suffers an additional lifetime cost of possible death every time he/she constructs a house. If the builder places some non-zero value on his life, he/she will choose to constrain the amount of houses that he builds even if there is demand for more. In economic terms, the death risk is an additional “cost” to production.
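The quoted logic reduces to a simple expected-cost comparison. A sketch (every number below is a made-up illustration, not from the source): once the builder internalizes the death risk, it acts like a per-house tax, and marginal houses stop being worth building.

```python
def marginal_house_profitable(price, build_cost, collapse_risk, value_of_life):
    """Under the Hammurabi rule, each house carries an expected death
    cost of collapse_risk * value_of_life for the builder; the house
    is built only if the price covers build cost plus that risk cost."""
    return price > build_cost + collapse_risk * value_of_life

# Without the rule (no internalized risk), the marginal house gets built:
print(marginal_house_profitable(100, 90, collapse_risk=0.0, value_of_life=10_000))   # True
# With a 1% collapse risk and a life valued at 10,000, it does not:
print(marginal_house_profitable(100, 90, collapse_risk=0.01, value_of_life=10_000))  # False
```

The rule does align incentives, but at the price of under-provision: every house whose margin is smaller than the expected death cost simply goes unbuilt.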

So, to some extent, certain classes of activity are complex enough that assessing them is difficult regardless of whatever gizmos one has. Take Tabarrok and Cowen’s glowing depiction of the Silk Road:

The Silk Road marketplace for illegal goods, for example, supported millions of dollars of exchange through a dual reputation system. On the Silk Road it was possible to pay for goods in advance of delivery or to buy goods which were delivered before payment was made. In each case, honesty was maintained through reputation even without legal recourse for contract breach.[4] Thus, in these cases reputation maintained quality even when theories of information asymmetry would have predicted the problematic nature of any exchange at all.

Sounds gravy, except for the fact that the Silk Road did not function in this way whatsoever. It was not a bottom-up, decentralized reputation system but one man’s fiefdom:

Ulbricht built the Silk Road marketplace from nothing, pursuing both a political dream and his own self-interest. However, in making a market he found himself building a micro-state, with increasing levels of bureaucracy and rule‑enforcement and, eventually, the threat of violence against the most dangerous rule‑breakers. Trying to build Galt’s Gulch, he ended up reconstructing Hobbes’s Leviathan; he became the very thing he was trying to escape. But this should not have been a surprise.

If buyers do not have information on sellers and have little recourse when something goes wrong, sellers will have no willing partners. Hence a market for intermediaries forms, and with it the same coercive state or state-like organ that Tabarrok and Cowen dislike so much. If technology alone could solve information asymmetry problems, Ulbricht would have had no need to assume the role of violent God-King of Bitcoin.

Humans deeply, deeply dislike being measured and evaluated (unless they can control it)

However, even if we could design a machine that solved information asymmetry problems perfectly, would the human objects of such assessment allow us to use it?

One response to the piece made this trenchant observation: “Well, cars cannot vote on how carefully they are inspected. Reducing information rents when used cars are being sold seems like a no-brainer. Humans — some economists need reminding — are not machines. As such, they tend to have an opinion about the degree of scrutiny that is appropriate.” Indeed, another economist, Robin Hanson, has observed that we enjoy making decisions based on prestige, wit, fame, etc. instead of quantitative track records. We actively resist measurement.

If humans don’t like being measured, and especially don’t like being measured by machines capable of judging them, asymmetric information will remain a problem for the foreseeable future.

Let’s quote a bit from Hanson, who explicates this well:

As a society we supposedly coordinate in many ways to make medicine and law more effective, such as via funding med research, licensing professionals, and publishing legal precedents. Yet we don’t bother to coordinate to create track records for docs or lawyers, and in fact our public representatives tend to actively block such things. And strikingly: customers don’t much care. A politician who proposed to dump professional licensing would face outrage, and lose. A politician who proposed to post public track records would instead lose by being too boring.
On reflection, these examples are part of a larger pattern. For example, I’ve mentioned before that a media firm had a project to collect track records of media pundits, but then abandoned the project once it realized that this would reduce reader demand for pundits. Readers are instead told to pick pundits based on their wit, fame, and publication prestige. If readers really wanted pundit track records, some publication would offer them, but readers don’t much care.

This, of course, recurs in many other domains. For fairly obvious reasons, there is no centralized database we can use to make informed analyses of police officer-involved shootings. In many public organizations that do not answer to unforgiving shareholders every quarter, objective measures of performance are difficult to find. This is why organizational “strategies” are so vague and impotent: if an organization created a policy and strategy document detailed enough to matter, it would also be detailed enough to assess success or failure and hold people to account.

Or take resistance to standardized tests. Yes, they are highly flawed and select for certain socioeconomic characteristics and backgrounds. But it is not clear that the alternative is better or eliminates bias. Qualitative “whole person” judgment criteria leave a significant amount of discretion and thus a route for bias to creep in; both Jews and Asians have been victims of this in college admissions processes:

The only way to prevent [Jews from going to Harvard in large numbers], Lowell argued, was to impose strict quotas and restrictions. Ideally, Lowell wanted to cap Harvard’s Jewish population at 15% of the student body, according to Karabel. The size of the Jewish student body had quickly risen from 7% of freshmen in 1900 to 10% in 1909, 15% in 1915, 21.5% in 1922, and 27.6% in 1925. …
The plan was ultimately rejected by the Committee on Admissions — who Karabel writes were “reluctant to publically endorse a policy of discrimination” — but it reveals the explicit motivation for changing how Harvard chose its incoming students.
By 1926, Harvard moved away from admissions based strictly on academics to evaluating potential students on a number of qualifiers meant to reveal their “character.” A report released that year by an admissions committee endorsed a limit of 1,000 freshman per class — allowing a shift in policy, as Harvard could no longer admit every student who achieved a certain academic cutoff.

We hear all the time about quantitative and algorithmic criteria supposedly reinforcing political, economic, and social inequalities, but here it becomes clear that one reason individuals and institutions resist measurement is that measurement often produces outcomes they dislike and entails a loss of control. Jews scored well under the old system, so WASP elites changed it to give themselves greater discretion to discriminate; by resisting quantification, Harvard’s WASPs preserved their ability to keep Jews out.

One may also observe that Tabarrok and Cowen presume far more agreement about such matters than currently exists. For example, social welfare and affirmative action policies were Good Things to many Americans as long as those Americans could deny the benefits involved to groups of their choice. Once that implicit understanding was removed, being on the dole suddenly became subject to all sorts of onerous regulations that had not existed when the only people getting handouts looked like the Americans now trying to deny newer recipients the very same handouts. In areas like school admissions, credit scoring, and loans, quantification will never be seen as neutral, because the metrics themselves are objects of constant dispute. Hence there is an incentive to resist measurement whenever the resulting outcome is unfavorable.


So there are plenty of reasons to suspect that technology may not rule out information asymmetry (if it is, indeed, rooted in social factors) and that people will resist measurement that might improve information symmetry. But this says little about the technology side of the equation. Even if technology cannot do away with information asymmetry, can it still improve our decisions? Or is it an inscrutable, harmful, out-of-control “black box”? Can technology one day, some arbitrary number of years from now, render this essay/rant irrelevant?

The problem is that the question itself is framed incorrectly. The prior sections discussed noisy signals in social life and disputes over, and resistance to, measurement. If complex social institutions and outcomes are difficult to assess, then it follows that the technologies themselves are not what is inscrutable, and that disputes over algorithms are really disputes over which criteria of performance people want the algorithms to satisfy.

Social institutions and knowledge are inscrutable, technology is the easy part

A lengthy quotation from a blog I usually enjoy quoting:

So throughout the 90’s and the 00’s, if not earlier, ‘AI’ transformed into ‘machine learning’ and became the implementation of ‘soft’ forms of knowledge. These systems are built to learn to perform a task optimally based flexibly on feedback from past performance. They are in fact the cybernetic systems imagined by Norbert Wiener.
Perplexing, then, is the contemporary problem that the models created by these machine learning algorithms are opaque to their creators. These models were created using techniques that were designed precisely to solve the problems that systems based on explicit, communicable knowledge were meant to solve.
If you accept the thesis that contemporary ‘algorithms’-driven systems are well-designed implementations of ‘soft’ knowledge systems, then you get some interesting conclusions.
First, forget about interpreting the learned models of these systems and testing them for things like social discrimination, which is apparently in vogue. The right place to focus attention is on the function being optimized. All these feedback-based systems–whether they be based on evolutionary algorithms, or convergence on local maxima, or reinforcement learning, or whatever–are designed to optimize some goal function. That goal function is the closest thing you will get to an explicit representation of the purpose of the algorithm. It may change over time, but it should be coded there explicitly.
Second, because what the algorithm is designed to optimize is generally going to be something like ‘maximize ad revenue’ and not anything particularly explicitly pernicious like ‘screw over the disadvantaged people’, this line of inquiry will raise some interesting questions about, for example, the relationship between capitalism and social justice. By “raise some interesting questions”, I mean, “reveal some uncomfortable truths everyone is already aware of”. Once it becomes clear that the whole discussion of “algorithms” and their inscrutability is just a way of talking about societal problems and entrenched political interests without talking about it, it will probably be tabled due to its political infeasibility.

So, in other words: “know-how is not interpretable, so algorithms are not interpretable.” Recall that earlier in this piece it was noted that individuals and institutions resist measurement and quantification. While the prior examples almost all concerned institutions that resisted legibility from the top down (police departments resisting centralized databases for officer-involved shootings, Harvard hiding its discrimination against Jews, public policy organizations writing strategies that make it impossible to see whether they are strategic at all), this can also occur in a much less deliberate manner.

James C. Scott observed in Seeing Like a State that much knowledge is tacit, experiential, and localized. The quest of the 20th century’s authoritarian state-builders was to make social systems “legible” and, in essence, machine-readable. They searched for a way to make distributed, disorganized, and often highly elusive knowledge concrete and manipulable enough for the grand projects of the High Modernist era. And, of course, they failed miserably.

Moving from this to more practical concerns, organizations have long tried to apply information technology and automation to help them make better decisions, or to automate decisions away altogether. A key obstacle has always been the gap between the qualitative, fuzzy, and often highly domain-dependent nature of what the user wants (or thinks he or she wants) and the formal, restricted language of software engineering. Software engineering’s enormous pile of overlapping fads, methodologies, models, and design methods is a tribute to the difficulty of bridging that gap.

As the blogger quoted above notes, at a certain point expert systems fell out of vogue and the hot new thing became systems that only indirectly embodied the knowledge and goals of their creators. In many respects this represents a working solution to the problems Scott mentioned. If the system could not be specified precisely enough, the organization would settle for a “soft” approximation. Computational intelligence and metaheuristic optimization bill themselves essentially as “break in case of a problem you don’t know much about but within which you can still distinguish better from worse solutions.”
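A minimal hill-climbing sketch of that idea (a toy invented here, not any production system): the goal function is the one explicit, inspectable statement of purpose, while the “learned” parameters emerge indirectly from thousands of accept-if-better feedback steps that no single line of code explains.

```python
import random

random.seed(1)

# The goal function is the explicit part -- the closest thing the
# system has to a written-down statement of its purpose:
def goal(params):
    x, y = params
    return -(x - 3) ** 2 - (y + 1) ** 2  # best possible at (3, -1)

# The "model" is just whatever parameters survive the feedback loop;
# the loop only knows "better or worse", never "why".
params = [0.0, 0.0]
for _ in range(20_000):
    candidate = [p + random.uniform(-0.1, 0.1) for p in params]
    if goal(candidate) > goal(params):  # keep improvements only
        params = candidate

# The parameters drift to near the optimum the goal function names:
print(round(params[0], 1), round(params[1], 1))
```

Reading the final parameter values tells you almost nothing; reading the goal function tells you everything about what the system was for. That is the asymmetry the quoted blogger is pointing at.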

Hence algorithms themselves are far from inscrutable, black-boxed, or unknowable. To suggest otherwise is to indulge in the kind of crude and unscientific mysticism perennially popular in some precincts of the humanities and social sciences. However, it follows from the prior observations about the difficulty of measuring individuals and institutions that the purpose served by a technical system embedded within a social system cannot be determined by looking at the technical system in isolation. Any artificial agent’s behavior is a function of its goals, environment, and larger situated context, and the same holds true for other artificial artifacts.

For example, it is widely believed that Los Angeles’ urban geography reflects political and social faultlines and agendas. Yet note that the Amazon book the prior link directs you to is not by someone who chooses urban planning itself as the level of analysis. James Diego Vigil, a sociologist of LA gangs and youth communities, does not need to go there. As someone with expertise in the social system that explicitly set the goals of the technical system, he can confidently explain how the social and political goals and prejudices of LA’s city fathers and power figures produced today’s unique LA urban geography (and thus the social result he is seeking to explain).

Had he been an “algorithms” opponent, he would have engaged in the quixotic pursuit of “reverse-engineering” technical blueprints and artifacts for proof of some structural inequality, instead of examining the history books and primary sources that readily suggest some degree of malign intent in LA urbanism. He would likely have struggled to understand the overall social meaning of purely technical endeavors without reference to the institutions and individuals that ordered them.

People have incoherent beliefs about technology and human control; what we do know is that if they want technology, they also want it to do their bidding

Opponents of algorithms can’t make up their minds about what they dislike most about data-driven systems that automate human decisionmaking. Is it the loss of human control, or the idea that human bias may seep into the code?

If the concern is bias, then algorithm opponents should want to ensure that humans and their petty -ist/-ism biases do not corrupt the decisionmaking system, by depriving humans of agency and control. Decisionmaking should be as random (as opposed to biased/deterministic) and indifferent as possible, even if humans struggle to understand the outputs of such a system or the decisionmaking process it uses. Indeed, Hastie and Dawes argue that simple linear models routinely outperform experts in most, if not all, domains. Mitch Turk has tirelessly pointed out that autonomous cars will make better decisions with fewer negative consequences than human drivers, and that any shock and horror we feel about them is a function of how efficiently they optimize our unspoken moral choices (in which case it’s us, not the machine, that is the problem). If anything, our prejudices and biases are the “weak link” in the chain, not the machine.
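Hastie and Dawes’s point about “improper” linear models can be illustrated with a toy simulation (the data-generating process below is entirely invented): a unit-weight model that simply adds up the cues beats a simulated expert who uses the same cues but weighs them inconsistently from case to case.

```python
import random

random.seed(2)

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

CASES = 5_000
cues = [[random.gauss(0, 1) for _ in range(3)] for _ in range(CASES)]
truth = [sum(c) + random.gauss(0, 1) for c in cues]  # cues plus luck

# "Improper" model: unit weights, no fitting whatsoever.
model = [sum(c) for c in cues]
# "Expert": same cues, but the weights jitter from case to case,
# mimicking inconsistent human judgment.
expert = [sum(random.gauss(1, 0.8) * x for x in c) for c in cues]

print(corr(model, truth) > corr(expert, truth))  # unit weights win
```

The expert loses not because the cues are wrong but because of inconsistency: the jittering weights inject noise that the boring, rigid model never adds.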

Of course, algorithm opponents often make emotive appeals to human fears of automation and loss of control — overrating the potential for machinic harm and underrating the lethal implications of human frailty and bias. They talk about being able to understand and control algorithms and lament the technological conquest of feeble humanity (a fairly ridiculous proposition given that, as Evgeny Morozov points out, many criticisms of technology are really sublimated beefs with capitalism and larger social issues rather than with the technologies themselves). Returning to the autonomous car, though, this suggests some hope for algorithm opponents. Putting the human in the loop, with the ability to guide and bias the machine, can correct for the “creepiness” involved. Anything the human can do to make the system more understandable, predictable, and controllable is great! But if the problem is political and social bias, this just reintroduces human bias into the equation.

The contradiction is resolved as follows: if critics can bias the system to make decisions congruent with their ideological preferences, they also retain human control. Of course, we are then back at square one — a machine that makes consequential decisions based on formalized criteria biased toward the human designer’s preferences. And mainly left-leaning algorithm critics will be rudely surprised to find that those preferences are not shared by a large body of people outside their ideological circles. Said ideological opposites are likely to have the same objections (“the machine doesn’t do what I want it to! Algorithms are creepy!”) and the circle will start all over again.

In fact, the phrase “[w]e need to have a conversation about the social and political implications of technology!” is banal and useless because it assumes that the speaker’s desired conversation outcome is shared by everyone else interested in the conversation. The speaker fails to realize that, as an activity, dialoguing about technology’s social and political implications is not equivalent to the reification and validation of his or her ideological and policy preferences.


I have attempted to answer here the question of why asymmetric information seems to be declining due to technology while many are convinced that technology poses new, dangerous, and creepy potentials due to its apparently inscrutable nature. I do not believe this answers the question definitively, but it suggests some possibilities:

  1. There are aspects of information asymmetry and noisy signals that technology alone isn’t going to fix (for now). Additionally, people and institutions do not like being measured and will actively thwart measurement and accountability at every turn. Hence there are big problems with naive assumptions that technology leads to symmetric information. It does not bode well for Tabarrok and Cowen’s argument that one of their primary success stories in reputation-based information symmetry (Silk Road) is actually nothing of the sort.
  2. Social and political knowledge is (surprise) inscrutable without deep subject-matter background, and institutions are willing to take shortcuts to represent “illegible” knowledge and goals as approximations in computer programs. The programs themselves are not black boxes; they are well-understood and often highly simplistic sequences of code. What is mysterious is the nature of the social contexts within which technological artifacts are embedded — mysterious, at least, to those who lack understanding of those contexts or who deliberately disregard them as units of analysis so that they can write paper after paper about why Big Data Is Bad (TM).
  3. Whether algorithms or any form of automation improve human decisionmaking is a fairly useless question. Decisionmaking for what, and by what criteria? Much of what I have read on this subject is incoherent. Critics somehow dislike IT systems for being vessels of sociopolitical bias yet also dislike the idea of those systems acting in ways that would lessen human control while compensating for human bias. But a system that satisfies the ideological/normative goals and preferences of some critics will by definition trigger rebukes and suspicion from others. And it’s also silly to expect institutions with firm ideological and functional priors to change them. Google may be famously sensitive to the idea that it should not be “evil” in how it makes money, but it also exists to make money. If your beef is really with consumer capitalism, then there’s nothing Google can do to help you, and you should stop wasting its time complaining about its algorithms when it’s the company’s very purpose that so offends you.

In terms of practical policy solutions, (1) is probably most amenable to something that a diverse group of stakeholders could agree on. (2) and (3) are far more problematic.