Artificial Intelligence, Rules of Origin and the Lemons Problem

Nicolas Petit
Aug 23, 2018

At law school, one of my professors once asked: “Assume you are a sovereign king. You have the power to write and rewrite the law at will. What do you do?”

This was my first encounter with “normative” law. Unsophisticated scholars like me define this as what the law should be. This is distinct from exercises in “positive” law where the point is to elucidate, and explain, what the law is (the bread, butter … and caviar of legal practitioners).

Why the legalese?

Because a real-life exercise in normative law is today before us. With the rise of Artificial Intelligence (“AI”) in society, we face a hard normative question: what laws should govern AIs and the machines that embody them?

There are myriad ways to discuss this. The lazy me likes to think that two approaches really stand out. The first starts from the black letter law. It conjectures points of friction in the legal system when AI applications are rolled out. Common examples include questions such as: is a factory floor AI-powered robot a labor force worker or a piece of capital? Can a robot-worker be party to a labor contract? Can a robot own IP over inventions discovered on the job? Can it congregate with other robots, form a trade union and conclude collective bargaining agreements? Can a robot worker be owned, taxed, sold, or fired?

The upside of the black letter law approach is obvious. It gives us a stepwise method to think about normative AI law. All laws are indeed “object” oriented (or “formalistic”). For example, they refer to a “contract”. This concept is itself defined as a convention concluded between “parties”. And parties are either “individuals” or social or economic “groups” like corporations, NGOs or nation states. The black letter law approach is thus a good compass to methodically discover legal gaps and redundancies as well as conceptual rigidities and interpretive inflexibilities. This is especially true in civil law and continental legal systems, where the law is more often specified through structured “rules” rather than pure “standards” that define abstract goals (like fairness, wellbeing, dignity, etc.).
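
To make the “object oriented” metaphor concrete, here is a small, purely illustrative Python sketch. The class names and the robot example are mine, not drawn from any statute; the point is only to show how formal legal concepts reference one another like typed objects, and why gaps appear when a new kind of entity knocks on the door.

```python
# Purely illustrative: black letter law pictured as an "object" hierarchy.
from dataclasses import dataclass


@dataclass
class Party:
    """An individual, or a social/economic group such as a corporation, NGO or nation state."""
    name: str


@dataclass
class Contract:
    """A convention concluded between parties."""
    parties: tuple[Party, Party]
    subject_matter: str


# The gap-finding exercise: nothing in the formal structure stops us from treating
# an AI-powered factory robot as a Party...
robot = Party(name="factory floor robot #7")
deal = Contract(parties=(Party(name="ACME Corp"), robot), subject_matter="labor")
print(deal)
# ...but whether current black letter law actually allows this is exactly the open question.
```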

But the black letter law approach has a downside. It does not catch emergent AI behavior. To take a fictional example — no Westworld spoiler intended — what if several factory AI workers start a conversation in a non-human language, and conspire to organize a socialist revolution? We have no black letter laws with specified objects designed to proximately or remotely regulate non-human language communications in the workplace. And general principles of law do not give us a ready-made answer to the normative question: should we prohibit this?

This is where the second approach helps. Here, the idea is to operate just below the skin of black letter law. The point is to explore the entire legal system in search of design patterns and transversal properties of text and case law. To use a computer science metaphor, we reverse engineer the first principles underpinning the graphical user interface of legal prescriptions. Those properties often give information on the big idea pursued by lawmakers, and help overcome the ambiguity of the explicit goals loosely and superfluously expressed in the boilerplate provisions of any judgment or preamble of a statutory instrument. To come back to our fictional example, we can observe that our laws accept the production and commercialization of machines like computers, tablets and cellphones which run on binary machine code that most humans are unable to read. We can thus derive from this that such communications are a priori lawful, and that it is not necessary for all of us citizens to understand them. At the same time, unlike binary code, which at least a few people can read, the way AIs communicate with each other may be incomprehensible to everyone. Does this difference require stricter laws? The answer to that question is not obvious, but the second approach brings you one step closer to it.

Once the basic properties of our laws are mapped out, we can question whether AI idiosyncrasies require additional law creation (upholding those social choices in new legal instruments).[1]

An illustration of this exercise relates to “rules of origin”. In our legal systems, a whole host of rules effectuate a first-order social demand to know the “who”, “where”, “how” or “why” behind an output. Obvious examples are the rules on mandatory labelling of food products (e.g., GMOs), origin marking requirements like “Made in China”, or the GDPR duty to disclose “automated decision making”.

IMHO, AI augurs a promising future for “rules of origin”. In an AI-centric world, the demand for man-made outputs will grow, driven by individuals’ valuation of craft, trust, legibility and projection.[2] We have in fact known for a long time that economic agents display such preferences. The predominance, since the 1970s, of films featuring human actors over “computer generated imagery” animated movies is a case in point.

At the same time, the supply of AI outputs will expand in a broad array of areas from journalism to recreational arts, from the pricing of retail goods on e-commerce platforms to troubleshooting call centers.

To date, few of those domains are covered by rules of origin that prevent opportunistic suppliers from fooling users into the belief that they are buying man-made products. They should be. Economics 101 tells us why. Unless enforceable rules of origin are adopted, markets will not generate clear price signals that differentiate man-made from machine-made outputs. We will end up with a “lemons problem”.[3] When there is imperfect information, potential buyers of “high price” man-made products will discount their maximum purchasing valuation by a discrete amount to internalize the risk of being sold “low price” machine-made products. Say man-made books are worth 20€ and machine-made ones 10€. If buyers believe there is a 50% probability that the book has been written by a machine, the market equilibrium price will be 15€. The upshot is this: no publisher of man-made books will come to this market. By contrast, suppliers of machine-made books will make a killing. In the end, lemons problems of this kind inefficiently discourage the production of man-made outputs. Black Mirror conjecture here: the end game could be human joblessness.
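
For readers who prefer numbers to prose, here is a minimal Python sketch of that unravelling. The 20€, 10€ and 50% figures come from the example above; the 16€ reservation price of publishers of man-made books is an illustrative assumption of mine, not a figure from the text.

```python
# A minimal sketch of the adverse-selection ("lemons") dynamic described above.
V_HUMAN = 20.0        # buyers' valuation of a man-made book (€), from the example
V_MACHINE = 10.0      # buyers' valuation of a machine-made book (€), from the example
HUMAN_RESERVE = 16.0  # assumed minimum price a publisher of man-made books will accept (€)


def willingness_to_pay(p_machine: float) -> float:
    """Without an origin label, buyers can only pay the expected value of an unlabelled book."""
    return (1 - p_machine) * V_HUMAN + p_machine * V_MACHINE


# Buyers believe half of the unlabelled books are machine-made.
pooled_price = willingness_to_pay(0.5)
print(f"Pooled market price: {pooled_price:.2f}€")  # 15.00€

# That price is below what publishers of man-made books need, so they exit the market...
assert pooled_price < HUMAN_RESERVE

# ...which shifts buyers' beliefs, and the market unravels towards machine-made books only.
print(f"Price once only machine-made books remain: {willingness_to_pay(1.0):.2f}€")  # 10.00€
```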

The good news is: the legal system is here to remedy market failures of this kind. Private or public ordering institutions can design rules of origin that promote the provision of optimal information on markets. A “man-made v machine-made” label is an obvious example. But more specific rules of origin may have to be invented in situations where users value more accurate information on the particular AI technique or dataset employed.[4] Similarly, when hybrid outputs are concerned, buyers may display different reservation prices depending on the output’s man v AI mix.[5] Further, to assist consumer choice, sellers of AI-made outputs could be required to provide information on the next best competing man-made alternative. At the extreme, quotas, tariffs or other quantitative restrictions on AI-made outputs may be necessary to maintain a reference man-made product price cap in the marketplace.
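
To fix ideas, here is one hypothetical sketch of what a machine-readable origin label could carry, combining the disclosures just discussed (man v machine, the AI technique, the dataset, and the man/AI mix for hybrid outputs). The field names are mine and purely illustrative, not drawn from any existing or proposed rule.

```python
# Hypothetical sketch of an origin label record; field names are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class OriginLabel:
    producer: str                        # "human", "machine" or "hybrid"
    ai_technique: Optional[str] = None   # e.g. the model family used, if machine or hybrid
    training_data: Optional[str] = None  # e.g. provenance of the dataset employed
    human_share: float = 1.0             # man-made share of a hybrid output, from 0.0 to 1.0


# A hybrid book whose text is, say, 70% man-made:
label = OriginLabel(producer="hybrid",
                    ai_technique="large language model",
                    training_data="publisher's in-house corpus",
                    human_share=0.7)
print(label)
```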

Of course, markets (and AI) are perfectly apt to generate such information practices. But they are not a stable equilibrium due to collective action problems. In competitive markets, profit-maximizing suppliers have incentives to cheat on soft commitments. Enforceable contracts and property rights, industry or Government-led standardization, or regulatory intervention are thus needed to keep everyone in line. In California, a law was introduced that makes it unlawful for a person to use a bot to communicate or interact with another person without disclosure.

Even more important than marketplace efficiency, rules of origin matter in areas where States provide public goods to society. Think about the justice system. Hearings are in principle public for a reason: seeing judges and juries decide cases gives us knowledge of the “who”, “how”, “why” and “where” behind the justice that is handed down. In other words, justice is kept under watchful eyes. Is this still true with automated justice? When law enforcement is embedded in computer code, and judicial decisions are delegated to unfathomable deep learning processes, natural and legal subjects lose understanding of the origination of justice. Arbitrariness, or the perception thereof, is the outcome.[6] Legitimacy is the loser.

This post was also published on the platform of the European AI Alliance

[1] Assuming invariance in collective preferences.

[2] As economist Mogstad recently said, automation “may very well create demand for service with a personal touch”. See https://www.wsj.com/articles/short-of-workers-fast-food-restaurants-turn-to-robots-1529868693

[3] Akerlof, George A. (1970). “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism”. Quarterly Journal of Economics. The MIT Press. 84 (3): 488–500.

[4] A left-handed person may prefer to train with a virtual tennis coach that has mostly used training data from left-handed ATP tour players.

[5] Customers’ utility function may change drastically depending on whether there’s a human pilot in the plane.

[6] Though I fully acknowledge that the use of AI assistants in the justice system helps correct existing and documented biases of man-made justice.

Nicolas Petit

Prof Uni Liege, Belgium and UniSA, Australia. Visiting Scholar @Stanford Uni Hoover Institution. All things tech, antitrust, law and economics.