Who Decides AI’s Role In Human Governance?

A Rejoinder to Professor John O. McGinnis

Ken Grady
The Algorithmic Society
8 min read · Jan 11, 2018


Marinus van Reymerswaele, The Lawyer’s Office, 1545

The rapid development of artificial intelligence software raises a fundamental question: who decides whether AI or humans will set the rules of law by which societies are governed? The emergence of AI that can “see beyond” what humans see — find patterns that are so subtle or complicated humans cannot discern them — has led to the meme that AI will overpower humans in many areas, including certain “formal sub-fields” of law. Accept that meme and you have taken a big step in the direction of AI setting the rules.

In the pattern recognition business (the core of AI), AI will top humans with ease. AI draws on sensory arrays more powerful than our biological ones (e.g., eyes). AI compute power already exceeds our biological compute power. And AI accesses data storage in greater volumes, with greater recall accuracy, and at greater speeds than anything we can do. Of course, recognizing a pattern and understanding that pattern are not the same.

John McGinnis, George C. Dix Professor of Constitutional Law at Northwestern University’s Pritzker School of Law, posted a short article about the booming power of AI and its potential impact on law:

4. Law may not be a completely formal system but is [sic-it] has some formal subfields. Think about the rule against perpetuities and some areas of tax. Lawyers in these areas may well be machined away, as it were. Lawyers would be well advised to go into areas where law is fuzzy and politically driven. Administrative agency promulgated rules may be a good area.

5. In a previous essay, I suggested that in most areas of law a computer and a human would be better than just a computer, because even in the formal domain of chess computers plus very good humans could beat a computer because they complemented one another. But it is not all clear that humans could much improve on Alpha Go and there will be some domains of law that may become almost wholly the province of AI.

Sounds like sage career advice. McGinnis is on stable ground by repeating the new saw that lawyers plus data are better positioned to provide advice than lawyers without data. It is tempting to accept both statements uncritically, but if we pause for a moment we can see the flaws.

First, let’s consider the statement that law students should head for the career safety of administrative law, recognizing that even that shelter may be temporary. Is tax a formal sub-field ready to be subsumed by AI? I would answer no. Certainly, in all areas of law (even constitutional) there are places where rote automation could occur. Perhaps tax has more of them. But rote automation is not AI. It is simply recognition of something that has always been part of law. There are discrete tax computation issues where judgment and discretion have been worn off. They are small and hardly worth talking about. That my gross income equals the sum of various numbers does not add much value to this conversation, even though tax has more of those formal points than administrative law.
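The "gross income equals the sum of various numbers" point can be made concrete. Here is a minimal sketch, with hypothetical income categories and figures (not actual tax code rules), showing why such a computation is rote automation rather than AI:

```python
# Hypothetical illustration of a "formal" tax computation: pure
# arithmetic with every trace of judgment and discretion worn off.
# The categories and figures are invented for this sketch.

def gross_income(wages: float, interest: float, dividends: float) -> float:
    """Gross income as a simple sum of income components."""
    return wages + interest + dividends

def taxable_income(gross: float, deductions: float) -> float:
    """Gross income less deductions, floored at zero."""
    return max(gross - deductions, 0.0)

income = gross_income(wages=60_000, interest=500, dividends=1_500)
print(taxable_income(income, deductions=12_000))  # 50000.0
```

Nothing here "sees beyond" what a human sees. The value judgments, such as what counts as income and which deductions exist, were all made before the first line of code was written.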

The larger question is how we have embedded our values in the tax law, and who decides values questions when they are encountered. The recent federal tax code debacle (ahem, revisions) is a timely example. Calculating your new taxes based on the changes will eventually, once all the errors in the bills are fixed, slide toward formal. But the real issues in the new tax law — certain to provide income to current and future generations of tax lawyers — are the value issues that were just put into the law. The plain truth is that no one knows what values were embedded or how they will clash with ones already in the law. Which deductions survived, and in what particular form, will take years to sort out as regulations are written, interpreted, litigated, and re-written.

Under McGinnis’ Alpha Go view, the job of sorting this out could go to the rookie, the AI. What will the AI use to make the value judgments? Will it model them after human value judgments? Which ones — and how will it choose? Will it go into battle with biases built in (under the way we build AI today, most certainly)? The formal sub-field of tax law just became a pool of ambiguity.

Second, let’s consider the statement that lawyer plus computer is always greater than lawyer without computer. By it, I think, McGinnis means the lawyer’s judgment is improved by the vast quantity of data crunching the AI can provide. I can analyze the 30 cases I’ve seen in my lifetime; at the same time, the AI examines 30,000 cases. Bigger data set, better analytics.

Yet the bias issue will crop up in that simple discussion again. Is the biased algorithm being used in the AI (an algorithm I did not design and perhaps neither the designer nor I understand) better or worse than having no data at all? Will reliance on data erode our understanding of it? Does law lend itself to large-scale data analysis, when its history has been built on small data sets? Lawyers for years defended the notion of locality. If you wanted to litigate in the state courthouse, you needed to be a member of the local bar regularly practicing there, or you needed to partner with such a lawyer. Law circulated around each court in the form of decided opinions, but also in local court customs. How will a data-crunching exercise across all state court opinions on a certain clause capture those localities?
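The trade-off between a small set of cases a lawyer understands and a large set gathered with a skew can be illustrated with a toy simulation. Everything here is hypothetical: the "true" rate and the biased sampling rate are invented to make the point that a larger, biased data set can mislead more than a smaller, representative one:

```python
import random

random.seed(0)

TRUE_RATE = 0.30    # hypothetical real frequency of an outcome across all courts
BIASED_RATE = 0.45  # hypothetical frequency in the over-sampled localities

# 30 cases drawn from the true distribution (the lawyer's lifetime).
small_unbiased = [random.random() < TRUE_RATE for _ in range(30)]

# 30,000 cases drawn disproportionately from skewed localities (the AI's corpus).
large_biased = [random.random() < BIASED_RATE for _ in range(30_000)]

est_small = sum(small_unbiased) / len(small_unbiased)
est_large = sum(large_biased) / len(large_biased)

# The large estimate clusters tightly around 0.45, far from the true 0.30;
# the small estimate is noisy but centered on the right value.
print(f"small unbiased estimate: {est_small:.2f}")
print(f"large biased estimate:   {est_large:.2f}")
```

The larger corpus produces a tighter, more confident answer, and that confidence is exactly what makes the embedded skew dangerous.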

These types of questions lead us to a bigger question, one that McGinnis seems to ignore. His key point is that AI in many forms, and perhaps in the Alpha Go form, does not bode well for human jobs. It is part of the central debate: will AI displace humans on the job site? Yes and no. But let’s stick to lawyers, and I would say: only if we make the grievous mistake of letting it.

We Already Have A System To Capture and Codify Human Values

In his 2016 Hochelaga Lecture at the Centre for Comparative and Public Law, Faculty of Law, University of Hong Kong, Chief Justice Allsop AO of the Federal Court of Australia began with the simple declaration: “Law, at its very foundation, is conceived and derived from values.” Chief Justice Allsop goes on to talk about how human values lie in many places, not just codified law. His statements provide an important reminder. For centuries, humans have built, studied, written, and re-written schemes that capture our values to the extent we deemed necessary to govern ourselves. We have value systems, and that is not our problem. That we have messy, incomplete, poorly written, and constantly changing systems presents many challenges. But we shouldn’t let go of the central truth that we do have them.

Lawyers have been key actors in the value system process and have developed a wealth of knowledge about what works and what does not. Imperfect, but valuable knowledge. Why are we encouraging ourselves to abandon that knowledge and let AI take over and build new value systems to displace what we already have? Why give up tax law and retreat to administrative law? The threat to many lawyer jobs is our failure to prepare students and lawyers to handle, or even to create, the jobs that will be needed. We continue to train lawyers for 20th-century jobs that won’t exist.

We can see these questions rising as efforts outside and inside the legal industry to build new AI-based value systems grow. A new grant competition funded through the Future of Life Institute subsumes them under the popular topic of “values” in software development. The core idea of the values-in-AI movement, started by Stuart Russell, is that we try to limit AI by embedding human values in the software. The Asilomar AI Principles have become the intellectual embodiment of the direction of this movement. Of the 23 Asilomar Principles, numbers 6 through 18 in particular attempt to state ethics and values that should be the focus of research and development to shape AI from “undirected intelligence” into “beneficial intelligence.”

IBM research scientist Francesca Rossi says, “there is scientific research that can be undertaken to actually understand how to go from these values that we all agree on to embedding them into the AI system that’s working with humans.” But we don’t all agree on values. They differ along many dimensions and geographies. We also differ on the degree and nature of codification. We don’t codify many aspects of our culture, yet those aspects of culture reflect our values. While codified laws may prohibit shouting “Fire!” in a crowded movie theater, the vast majority of us refrain from doing so because of cultural values against such behavior. The codified law in that situation is an afterthought.

Look closely at these discussions, and you will find that our largest, most comprehensive value system, built over hundreds of years, is absent. No one notices the law. No one discusses why it is an inadequate foundation for AI or, even stronger, why it isn’t the value system for AI.

Humans Should Decide AI’s Role In Governance

Before I leave this post, I should answer the question: I believe humans should decide what role, if any, AI plays in human governance. I will take that one step further. Rather than assuming AI should ramble through the law looking for value patterns, we should consider an “AI-free” zone: a sandbox of law where humans work out their values and how to put those values into our existing systems. AI connects at the output, as the values emerge.

So, Professor McGinnis, before I suggest my wife (a proud Northwestern Pritzker School of Law graduate) give up her focus on tax law and before I (another proud graduate) turn my focus to administrative law, I suggest we attend to the bigger picture. Why not employ lawyers in exploring how we can improve all aspects of our current value systems, including that large body we call “law,” and refrain from asking Alpha Go or other AI to build us new systems? Let’s not fall into the data scientist trap of believing that because we can teach AI tricks, we should teach it those tricks and then “see what it can do.”

We are going through a period where “lack of leadership” is a phrase repeated daily. Ben W. Heineman, Jr. has encouraged lawyers to take leadership through roles such as the statesman-lawyer. Lawyers have a unique vantage point from which to study and implement values through leadership. It may be difficult to define, but it exists. The sooner we embrace it and use it to shape the debate about future value systems in an AI world, the sooner we move toward healing that lack of leadership. That would be a valuable use of law student and lawyer time, and, I would argue, more meaningful than ceding tax law to AI and arguing that administrative law is a more productive place for someone’s near-term career.

Ken is a speaker and author on innovation, leadership, and on the future of people, process, and technology. You can follow him on Twitter, connect with him on LinkedIn, and follow him on Facebook.


Writing & innovating at the intersection of people, processes, & tech. @LeanLawStrategy; https://medium.com/the-algorithmic-society.