Class 1: Society, Risk, Technology, & Law

Small Choices Have Even Bigger Outcomes in AI & Law

Michael Fischer
Stanford Law: Regulating AI

--

By Michael Fischer & Shreyas Parab

Notes from Class 1 on Regulating Artificial Intelligence at Stanford Law School

What is AI?

People may relate to the peculiar linguistic and psychological effect that occurs when repeating a word to oneself many times. (If you are unfamiliar with this experience, I highly recommend you repeat the word "law" to yourself.) This concept is known as semantic satiation, and it applies not only when saying a word or phrase but also when it undergoes extended "inspection or analysis" (Wikipedia page on semantic satiation). So how does this relate to AI? The constant use and analysis of the term has produced something like semantic satiation: "AI" has become a buzzword used in such a variety of contexts that it loses meaning. For the purposes of this class and these notes, we will use the working definition of Stuart Russell and Peter Norvig, two leading AI researchers and thought leaders: "the capacity to undertake functions that, if performed by a human, would generally be understood to require 'intelligence'". It is important to distinguish the variety of ways artificial intelligence can be understood.

Alongside AI lurk other terms, such as machine learning and deep learning. AI itself can have both broad and narrow meanings. Sometimes AI refers to "general" artificial intelligence, whereby the computer has the capacity to perform any intellectual task that a human can and has a full range of cognitive abilities. Other times AI means "narrow AI," focused on a single task such as identifying animals in a picture, playing chess, or translating between languages. The idea with narrow AI is that there is a well-defined problem and the computer calculates a specific, well-defined answer. As computers improve each year, narrow AI gets broader. Some used to think that playing the game of Go was something only a human could do, and therefore part of general AI; once a computer learned to play Go better than a human, people were quick to say that solving Go should actually be considered narrow AI. As AI accomplishes tasks that were previously thought to be uniquely human, the bar is raised. The upshot is that the boundary between narrow and general AI changes over time. But at some point they will overlap. Most believe general AI will come eventually; determining when is the billion-dollar question.
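To make "narrow" concrete, here is a minimal sketch in Python (my illustration, not from the course) of a single-task system: a classifier trained to recognize handwritten digits. The input and the output are both well defined, and the resulting model is useless for any other task.

```python
# A minimal sketch of "narrow AI": a well-defined input (an 8x8 digit image)
# mapped to a well-defined output (which digit it shows).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5000)  # a simple single-task classifier
model.fit(X_train, y_train)

print("accuracy on digit recognition:", model.score(X_test, y_test))
# The same model is useless for chess, translation, or any other task:
# that single-task boundary is exactly what the "narrow" label captures.
```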

The revolution in narrow AI is well underway: thousands of companies and billions of dollars of value have already been created with it. But there is a broad sense that general artificial intelligence lurks in the background, coming at some point.

Thus there are two different branches to talk about when speaking of regulating AI. The first is the short-term view: the narrow AI technology we have now and how it will be intertwined with existing laws and regulations. Examples of how narrow AI will affect things in the short term include automatically setting bail for a defendant, limits on facial recognition technology at the border, and detecting insider trading on the stock markets.

Then there is the long-term view: what will happen when general artificial intelligence arrives and how it will interact with new or existing laws. Will we try to shoehorn it into existing laws, or will a new set of laws be needed? General artificial intelligence will have long-term consequences for the future of work, mental health, humans' sense of purpose, and the future of the human species. With both the short- and long-term views, we seek to examine what to expect and how best to prepare for the eventual consequences.

What opportunities and risks do we see in AI? AI is poised to have a positive impact on the world; if there were no upside, we would not be talking about it to begin with. Many technologies, from guns and nuclear bombs to computers and encryption, can be used both to help and to hurt humanity. With AI we need to weigh the opportunities against the risks we are willing to take as a society. We face these choices every day, both personally and from a regulatory perspective.

For example, what is a reasonable speed for driving on a highway? The first question we should ask is what we mean by "reasonable," since each of us defines the word differently. Cars can technically travel on highways at speeds of up to 120 mph, but most of us would not consider this reasonable. What is the risk tolerance we have for ourselves? What are other people's risk tolerances? How do we reach an optimal outcome? We regulate how fast we go on the freeway so that we have a common definition and understanding of the correct speed.
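A toy model (invented, purely illustrative numbers, not anything from class) makes the trade-off explicit: driving faster saves time but raises crash risk, and a "reasonable" speed is one that balances the two.

```python
# A toy cost model of highway speed; every number here is an assumption.
def expected_cost(speed_mph: float) -> float:
    trip_miles = 30
    value_of_hour = 25.0                         # assumed value of an hour of travel time ($)
    time_cost = trip_miles / speed_mph * value_of_hour
    crash_probability = (speed_mph / 1000) ** 3  # assumed: risk grows steeply with speed
    crash_cost = 20_000.0                        # assumed expected cost of a crash ($)
    return time_cost + crash_probability * crash_cost

# Scan candidate speeds and pick the one with the lowest combined cost.
best = min(range(30, 121, 5), key=expected_cost)
print(f"optimal speed under these assumptions: {best} mph")  # 60 mph here
```

Change the assumed value of time or the assumed crash cost and the optimum shifts, which is exactly why individuals disagree about reasonable speed and why regulation picks one number for everyone.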

Enforcement can take many forms. A passenger might urge you to speed so they can get to the hospital more quickly, a parent might tell you to drive safely before you leave on a long trip, or a cop might pull a car over. Regulation can be enforced through a wide variety of means.

What is regulation?

Regulation, in the context of this class, is defined broadly as how society deploys legitimate authority to structure relations among people, organizations, information, and the physical world. We often lump regulation, rules, and government into one massive category that structures society, but they are distinct, albeit similar, things. Regulation is not just the specific rules imposed by government agencies; it is also the way power manifests among our institutions and citizens. It is not only the government that regulates: citizens regulate themselves as well. We call this bottom-up regulation "norms." No government rule compels Americans to say "bless you" after someone sneezes, yet many feel the need to do so out of politeness. Even someone who would not individually choose to say it feels a compulsion to, because that is what society has determined as the norm.

There is a difference between what regulation seeks to do and how it is implemented. The force that is regulation, and the way we govern and enact that regulation in society, are distinct processes: regulation is both the object itself and the process through which that object manifests. We govern through constitutions at the federal and state levels, statutes, precedent set by judges, the decisions of government agencies, and finally the aforementioned norms. However, how we govern must adapt to one key consideration: political economy.

Political economy, in the modern context, examines how political institutions operate, the nature of the political environment, and the economic incentives at play in almost every decision made at scale in society. In less formal terms, political economy is a game of three-dimensional chess: the contextual backdrop that shapes each decision as part of an interconnected stream of decisions and power structures, past and future. Political economy is familiar ground for Justice Cuéllar, who has worked across the public sector in government agencies, the White House, and the judicial system. It can be an extremely convoluted system, with myriad variables at play and an even larger set of possible outcomes.

We’ll use a series of small case studies and thought experiments in order to better understand some of the questions that the intersection of technology and law raises and how political economy comes into play.

Case Study: Home Intrusion

A woman and her four-year-old son live alone in a suburban neighborhood with a decent number of people passing by and sitting on their porches. It is the kind of neighborhood where you can wave to your neighbors as they garden or watch their kids play in the yard. One day, at midday, an intruder enters the home, finds the living room occupied by the woman, and kills her before running away. Her son hears the commotion, comes out of his room, and sees his mom lying in a pool of blood. It takes him several minutes to realize that something is wrong and to work out what to do, then several more to run to a neighbor's house, explain what has happened, and get them to follow him back. Twelve minutes from that point, police and paramedics arrive, only to declare the mother dead from blood loss.

In a case like this, time is of the utmost essence, and unfortunately nothing could have been done to save this woman's life. Or could it? Could technology have intervened at some point in this sequence to offer a better outcome?

When prompted to brainstorm how technology could have played a role, students quickly offered up solutions: "an in-home monitoring system could have identified that someone who didn't live in the house had entered with a weapon"; "cameras throughout the neighborhood could have spotted the assailant coming into the home with a weapon and flagged it for police to investigate further"; "the mother could have had a one-press emergency alert system that the child knew to press"; "if the mother had a voice-activated emergency system, perhaps she could have screamed the trigger word before being murdered." All of these voice- or camera-based technologies might indeed have helped save the woman's life, either by preventing the crime from happening at all (by alerting police to someone carrying a knife in the neighborhood) or by shortening the police response once a crime had taken place.
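As a purely hypothetical sketch (the names, fields, and threshold below are invented, not any real product or anything specified in class), the decision rule inside such a monitoring system might reduce to something like this:

```python
# Hypothetical alert rule for a home monitoring system; all names invented.
from dataclasses import dataclass

@dataclass
class Detection:
    person_is_resident: bool  # assumed output of a face-recognition model
    weapon_visible: bool      # assumed output of an object-detection model
    confidence: float         # model confidence in [0, 1]

def should_alert_police(d: Detection, threshold: float = 0.9) -> bool:
    # Alert only on a confident detection of an armed non-resident.
    return (not d.person_is_resident) and d.weapon_visible and d.confidence >= threshold

intruder = Detection(person_is_resident=False, weapon_visible=True, confidence=0.95)
print(should_alert_police(intruder))  # True: dispatch an alert
```

Even the single `threshold` parameter encodes the trade-off discussed below: set it low and police are dispatched into homes on false alarms; set it high and real threats slip through.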

These kinds of solutions, however, raise significant questions about the role of police in the daily lives of citizens and the privacy of the home under surveillance, along with a myriad of other legal and ethical questions. We see here the trade-off between safety and autonomy, between surveillance and privacy. Perhaps technology could have saved that mother's life, but only by accepting societal trade-offs.

Even without regulation, such systems are already being deployed. Amazon's Ring video doorbells have HD cameras whose footage can be shared with local police departments; already, 400 police departments can view footage from the cameras. If the government had created such a camera network itself, it would have been subject to more legal scrutiny. But because the network was developed privately and then shared with public police departments, it was able to move forward without much public examination.

Case Study: College Campus Mental Health

In 2009, a UCLA student working in a lab was stabbed by a classmate. UCLA officials knew the assailant suffered from mental health problems, including paranoid delusions and auditory hallucinations. On top of that, the student had been kicked out of campus housing and had already signaled his disdain for the victim to a university employee. UCLA has a special team focused on identifying potential threats to student safety, and according to the university, the assailant was already being "closely monitored." The California Supreme Court ruled that public colleges have a duty to protect students from foreseeable violence in classrooms and other places where school-related activities take place. The school thus now bears liability for protecting its students from other students who are a known, identified risk. This was a ground-breaking decision for public universities, which argued that it placed an unmeetable burden on them to monitor each and every student, while leaving the door open to discrimination against students with mental health disorders.

Now, with the rise of predictive analytics and machine learning to recognize patterns of behavior, perhaps the university could deploy at scale a database that flags erratic behavior: missed classes, reports of violence, and integrations with school clinicians who rank their patients on the likelihood of harming themselves or others. Perhaps this assailant, on his own, had not demonstrated enough risk in his sessions with the school psychologist or psychiatrist to meet the threshold, but combined with the other factors it would have been clear that the student should be prevented from even entering campus. This raises questions of discrimination, profiling, and overreach of monitoring by a public university, which is an arm of the government. Even more, it calls into question how much we should rely on technology to ascertain what is "reasonably foreseeable," as the courts instructed the universities to do. If a technology is more accurate than humans at detecting people who pose a risk to their classmates, is the school obliged to follow the software over the recommendations of its own personnel?
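A hypothetical sketch (the feature names, weights, and numbers below are invented for illustration) of the composite scoring such a system implies: no single signal crosses the line, but the weighted combination does.

```python
# Invented weights for an invented composite campus-risk score.
WEIGHTS = {
    "missed_classes": 0.05,        # per class missed this term
    "violence_reports": 0.40,      # per report on file
    "clinician_risk_rating": 0.30, # clinician rating scaled to [0, 1]
}

def composite_risk(signals: dict[str, float]) -> float:
    return sum(WEIGHTS[name] * value for name, value in signals.items())

student = {"missed_classes": 6, "violence_reports": 1, "clinician_risk_rating": 0.5}
print(f"composite risk: {composite_risk(student):.2f}")  # 0.30 + 0.40 + 0.15 = 0.85
# Individually, none of these signals would trigger intervention;
# combined, the score may cross whatever threshold the university sets.
```

Every weight, and the threshold itself, is a policy choice, and each is a point where the discrimination and due-process concerns above can enter.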

Case Study: Pretrial Detention

Many judges across America use pretrial risk assessment software that helps them decide whether or not to release an individual, based on a variety of factors. In this case, a judge used a pretrial risk assessment to decide whether an individual charged with attacking a woman in her apartment should be released while awaiting trial. The judge, based on his experience and judgment, decides that the individual is fit to be released before trial, ignoring the software's recommendation to detain the accused. Right after this decision is made, a video of the attack goes viral, and there is public outcry that the judge acted wrongly in releasing the accused.

Of course the judge feels public pressure and reconsiders his decision. But if a judge has no discretion and is forced simply to follow the recommendation of the risk assessment software, what is the point of the judge at all? If she or he is just reading out the software's recommendation, surely even that could be automated. Even bigger questions arise as to whether the pretrial assessment is fair or just under the Constitution, and what "due process" means when all these variables are thrown into an algorithm that is usually a black box to human understanding and that outputs an answer with little indication of how it got there. Due process is the constitutional right to an outlined procedure that remains constant across people and is applied the same way across the board, ensuring that each individual retains rights such as an unbiased tribunal, notice of the proposed action, the ability to know the evidence against them, the opportunity to be represented by legal counsel, and a court record of the evidence and facts of the case along with the rationale behind the ruling. Algorithms satisfy some of those constraints very well, but untangling their own reasoning and weighting is something that is often not technically possible for artificial intelligence.
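To see why interpretability bears on due process, here is a minimal sketch (hypothetical factors and weights, not any real assessment tool) of a scoring rule that can itemize exactly how each factor moved its recommendation, which a black-box model generally cannot do.

```python
# Hypothetical factors and weights: invented for illustration, not a real tool.
FACTORS = {"prior_arrests": 0.6, "failed_appearances": 0.9, "age_under_25": 0.4}

def detain_recommendation(defendant: dict[str, float], cutoff: float = 1.0):
    # Itemize each factor's contribution to the overall score.
    contributions = {f: FACTORS[f] * v for f, v in defendant.items()}
    score = sum(contributions.values())
    return score >= cutoff, contributions

detain, why = detain_recommendation(
    {"prior_arrests": 1, "failed_appearances": 1, "age_under_25": 0})
print(detain)  # True: score 0.6 + 0.9 = 1.5 exceeds the 1.0 cutoff
print(why)     # the itemization is reviewable and contestable in court
```

The contrast is the point: a linear score like this can be challenged factor by factor, while a deep model's weighting usually cannot be unwound into anything a court could examine.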

Case Study: Deepwater Horizon

In 2010, while the professor was working at the White House, the Deepwater Horizon explosion happened in the Gulf of Mexico. As many may remember, the incident involved an offshore oil rig of the kind that sends massive drills down through over 5 miles of ocean and earth to bring up oil from deep within the Earth. Deepwater drilling is, of course, incredibly dangerous and risky. The Deepwater Horizon rig suffered a blowout and explosion that left a well leaking 210 million gallons of oil into the ocean. It is considered one of the worst environmental disasters in American history.

Yet, as with the UCLA safety officials in the previous case study, there were several warning signs that should have been flagged: a well fracture, troubling readings from monitoring devices, high levels of mud displacement, and the failure of a blowout preventer. All of these factors suggested negligence on BP's part in not acting on identifiable risks.

This raises the question of whether, had better technology captured all these disparate variables and risk factors, BP could have been held accountable by regulatory agencies with the power to prevent such disasters before they happen. As we saw, it took months to stop the oil leaking into the ocean, and the effects were catastrophic in their environmental, societal, and economic impact. Almost a decade later, communities are still affected by the disaster. What if predictive technology could have weighed those risks and identified the problem before it turned into a crisis? And what other factors would the predictive technology have to capture to also understand the political economy at play?

Case Study: Vietnam War

As we move further from the legal system toward the multi-faceted, hard-to-model situations of the real world, we studied the Vietnam War as a textbook example of political economy colliding with algorithmic approaches, and of how those approaches fall short in a complex political and socio-cultural environment.

Perhaps one of the most controversial wars in American history, the Vietnam War served as a reminder to America of the price of freedom and intervention, and of the sometimes dubious returns. When scholars study the Vietnam War, they ask: Why did the U.S. intervene? Why did the U.S. persist in a course of action that was likely to result in failure? How could Vietnam win against a technologically and economically superior adversary? What does this tell us about how governments make decisions based on geopolitics?

To understand the Vietnam War, we went back as far as the late 19th century, examining French influence in the region, the post-Great Depression era that FDR helped usher in, and the role America played in international politics at the time. We saw how the differing priorities of JFK and of Ho Chi Minh, the leader of North Vietnam, set the tone for a long conflict. Both sides were optimizing for different things, and there were clearly political and organizational constraints at play beyond the issue itself. Johnson had a busy slate of domestic programs that were at risk if he was perceived as weak on Vietnam. On top of that, Johnson had a presidential election coming up, with most of his political opponents using the Vietnam War as leverage against him.

Moreover, his key advisors continued to advocate for the war. One such advisor was Robert McNamara, U.S. Secretary of Defense from 1961 to 1968, who based his strategy and advice on quantitative observations and computer models. As the information age dawned, McNamara became infatuated with modeling the war through computational models and projections. In 1962, he told the press, "every quantitative measurement… shows that we are winning the war." He advised the President to double down on the war because, according to the computer models, the U.S. was nearing the tipping point needed to win. Unfortunately, he was incorrect. Years later he admitted that the computer models and statistics he had been so adamant about following were "grossly in error."

In fact, this episode gave its name to "the McNamara fallacy": the fallacy of making a decision based solely on quantitative observations while ignoring everything that cannot easily be measured. The Vietnam War was an implementation of computational models gone wrong. The computer had no way of accounting for all the factors at play: the political pressures, the strength of guerrilla warfare, and the millions of other variables involved. The real world is hard to model, and when you try to do so, you often risk far more than if you had relied on human intelligence.
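A toy illustration (invented options and numbers, purely to show the structure of the fallacy): a decision rule that ranks options only by what is measured can endorse the option that is worst once the unmeasured factors are counted.

```python
# Invented options and payoffs, illustrating the McNamara fallacy's structure.
options = {
    # measured metric (e.g., enemy losses) vs an unmeasured cost (e.g., support lost)
    "escalate bombing": {"measured_gain": 900, "unmeasured_cost": 1000},
    "hold positions":   {"measured_gain": 300, "unmeasured_cost": 100},
    "negotiate":        {"measured_gain": 0,   "unmeasured_cost": 0},
}

def metrics_only_choice(opts):
    # The fallacy: rank only by what the model can count.
    return max(opts, key=lambda name: opts[name]["measured_gain"])

def full_picture_choice(opts):
    # With the unmeasured costs included, the ranking flips.
    return max(opts, key=lambda name: opts[name]["measured_gain"]
                                      - opts[name]["unmeasured_cost"])

print(metrics_only_choice(options))  # "escalate bombing"
print(full_picture_choice(options))  # "hold positions"
```

McNamara's models could count casualties and sorties; they could not count legitimacy, morale, or political will, so those terms silently dropped out of the optimization.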

Opportunities, Risks, and Limitations

What are the opportunities and risks that come with using regulation? Even when we know a problem is coming, regulation does not solve everything. Take the issue of global warming: we have known about the effects humans can have on the environment for over 100 years. Moreover, more than half of global emissions have occurred in the last 30 years. Yet collectively we have not been able to develop a regulatory solution.

Global warming has been known about for 100 years, and it will probably take another 100 before we really start to see its devastating effects. AI, by contrast, has really only been around for 10 years, and within the next 50 we will start to see its devastating effects. On a rate-of-change basis, then, AI is significantly faster moving and more dangerous than climate change. If we are not able to rally around climate change, will we be able to rally around another large, amorphous issue such as AI?

Policy changes as politics change. After the Vietnam War came severe economic costs and political isolation, and political turmoil within both the United States and Vietnam. As the war wound down, the U.S. military started to move money from developing weapons to developing the internet. During the Cold War there had been many concerns about building a robust, distributed network that could withstand a nuclear war; the result was originally called ARPANET but soon became what we know today as the internet. Alongside networked computing between mainframes, the world was starting to develop the idea of the personal computer, which would take the place of the mainframe and allow the software industry to develop. Software began to "eat the world."

Industries that had been stagnant began to be disrupted through the creative use of software, and these older industries are typically the most heavily regulated: take transportation and Uber, or hospitality and Airbnb. Cypherpunks saw the internet as their salvation from regulation. John Perry Barlow wrote a piece, "A Declaration of the Independence of Cyberspace," in which he stated: "Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather." He continued: "Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions."

Barlow looks at cyberspace as separate from society: the internet is its own place, removed from people, where tech will solve all our problems and remove the need for law. Does this square with what we now understand about the impact of technology companies such as Facebook and Google, whose actions pass from the digital realm into the physical one?

The second wave of the internet brought up questions about the concentration of wealth and power gained by large digital platforms. Some see the power these platforms have amassed as monopoly power that the government should break up. New internet companies are pushing into the gray areas of regulation, forcing governments to rethink their laws. Gig-economy companies such as Uber and DoorDash are taking steps to redefine what an employee is, what it means to work for a company, and what benefits an employee should receive. Companies are pushing further and further against existing governmental and social norms, with hundreds of thousands of workers eager to sign up.

Other fundamental rights, such as free speech, are being tested online. Who is responsible for false or hateful speech on the internet? Do we hold our norms of free speech as sacrosanct online, or is the internet something different, where one person can have an outsized voice and society must take precautions to protect people? CDA 230 is the most important law protecting internet speech. It states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In simpler terms, online platforms such as YouTube and Facebook, and even internet service providers, are protected against laws that would otherwise hold them legally responsible for content published on their platforms. This law has allowed online spaces to flourish: reviews can be posted on Yelp, classified ads on Craigslist, and opinions on Twitter. Without such a law, it would be nearly impossible for online services to exist, because they could be held accountable for every one of their users' actions. The United States is seen as a safe haven for websites that need a platform for controversial or political speech, and it is this same free speech on online platforms that allows Americans to exercise their constitutional rights and underpins democracy.

Having millions of reviews, opinions, and pieces of user data hosted in one location brings another set of problems. Companies can make recommendations based on other users' data; more concerning, they can conduct surveillance at scale with all the data they have collected. This makes them a target both for requests from the government and for hackers. It puts companies at great tort risk, and it puts consumers and users at risk on an unprecedented, worldwide scale. This gets back to the disproportionate concentration of power in these digital systems.

If we agree that a society needs some level of law to operate, then providing guidelines for how those laws apply in new areas can lead to a more efficient and less uncertain marketplace. Each of the systems we examined shows the harm that comes from neglecting systemic safety issues.

Conclusion

In conclusion, each of these case studies has shown that even small choices can have big consequences in how outcomes are shaped. When we design a system from a set of axioms, how the system will grow is an open question. Even if we develop a set of laws, rules, and regulations, the emergent behavior they produce is often not what we expect. Even the most careful parents can raise a child who is far from what they expected.

Many of the major decisions society faces have legal, organizational, and technical dimensions. We looked at the example of Vietnam and saw how each of these factors was connected to the others in unexpected and complex ways. Crafting good legislation for a complex world, within a complex legislative structure, is hard, and we should understand its limitations. Often decisions are not even contained within the confines of our nation: geopolitics and political economy lurk in the background of nearly every decision, spanning the entire globe and the human race.

This class is an ambitious one that tries to begin answering many of these questions, or at least to open a discussion that acknowledges their complexity. Complex situations sometimes beget complex solutions, but what this course and these case studies show is that by breaking issues into more digestible pieces, while keeping the political economy, technical limitations, and socio-cultural values in mind, we can develop answers to them. In this class we will be looking at the fast and slow thinking needed to understand how to address these issues of regulation, artificial intelligence, and society.
