Class 3: Rationales for Regulating and Institutional Mechanisms for Impacting the Use of AI

Data as a Social Mirror. Law as a Social Mirror.

Michael Fischer
Stanford Law: Regulating AI
17 min read · Oct 22, 2019

--

By Michael Fischer & Shreyas Parab

Like a good translator, we need to be versed in multiple languages when thinking about how to regulate AI, able to switch between them seamlessly. How do people involved in policy or regulation think about AI safety problems? How do people in technical fields think about policy and ethical problems? Oftentimes the miscommunication comes when one side speaks its own language to the other and fails to adapt to an audience that doesn't speak it. Even equipped with several languages, we find that it can be difficult to have it all. There will be tradeoffs, things that are lost in translation.

Before we get into the things that make regulating artificial intelligence tough because of what is lost in translation, let's define what is maintained. Law is a social mirror for society. Law is a manifestation of the "way things are" and the "way things should be". It exists in a superposition of the prescriptive and the descriptive, both at the same time but neither at any given moment. We will describe the intersectionality of where law sits a bit later, but experientially we know this to be true. All societies coalesce around norms and standards to live by in order to ensure some functionality and harmony across a diverse society. Those norms and standards become codified as time goes on because the systems of rules, and the society itself, become more complex, so much so that not having them written down and codified would result in chaos. We will expound on law as a social mirror in more depth later on, but before we do, we should foreground how algorithms and data are a social mirror.

Algorithms and data exist as a social mirror. Algorithms have become very, very efficient at demonstrating the bias in humans. By themselves, algorithms are mathematical operations and a series of logical steps that make meaning from information. Just as guns don't kill people, algorithms are not biased by themselves. What matters is HOW those algorithms are developed: what variables they take into account, what parameters are conveniently left out, and the optimization metrics the algorithm is told to maximize. Those things are 100% designed and determined by humans. We decide how to develop these algorithms, and they often reinforce the bias that already exists in humans. Oftentimes computer scientists and technologists do not even realize their algorithms might contain biases until AFTER the results of the algorithm are experienced; it is usually not until someone affected by the algorithm steps up and asks, "Hey, what is happening and why is this happening?". We had the great fortune of hearing from Dorsa Sadigh, a professor of Computer Science and of Electrical Engineering at Stanford, who drove home just HOW prevalent biases in algorithms and artificial intelligence can be and the extent to which they can IMPACT how society is shaped. Professor Sadigh presented two very compelling case studies of how algorithms and machine learning can reflect and amplify those biases.

Case Study: Word2Vec & Google Translate

Professor Sadigh presented a very compelling case of algorithms reflecting human bias, looking at translations from Google Translate and at a related family of machine learning models called Word2Vec. Word2Vec is a model that reconstructs the "linguistic contexts of words" and vectorizes them in order to calculate similarities and differences between them. Word2Vec was developed by researchers at Google and is patented by the company, and it is commonly understood as one of the mechanisms that fueled Google Translate's newer deep learning approach.

Word2Vec can quantify words, essentially perform operations on them, and make logical analogies, just like the old SAT used to ask of flummoxed high schoolers. For example, it can equate "France − French ≈ Mexico − Spanish", or "king − man ≈ queen − woman", or "king − man + woman ≈ queen". It understands the context around France, understands that French is the language of France, and can then apply that learning to how Spanish is the language of Mexico. These algorithms and approaches are revolutionary and can fundamentally change how we understand language. Yet... quickly we start to see problems.

For example, if you form a similar set of "mathematical operations" on words that carry heavy gender implications in American society, we see the algorithm pick up on that too. "Computer programmer − man + woman ≈ homemaker", or "doctor − father + mother ≈ nurse", or the one that got the entire classroom laughing: "feminist − woman + man ≈ conservatism". Somehow, without being explicitly taught to pick up on gender discrimination, the computer did. Clearly, women can be computer programmers; the first programmers were women. The challenge comes when those algorithmic biases appear in services like Google Translate, which, by the way, serves 200 million people across the world daily. Technology operates at a massive scale, and when these algorithms are biased to begin with, they perpetuate and reinforce the biases they mirror.
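You can reproduce this analogy arithmetic yourself. The sketch below uses the open-source gensim library and Google's pretrained News vectors (the corpus the classic examples come from); the exact words returned depend on the model, so treat the outputs as illustrative rather than guaranteed.

```python
# Analogy arithmetic on pretrained Word2Vec vectors (gensim).
import gensim.downloader as api

# Downloads ~1.6 GB of 300-dimensional vectors trained on Google News.
model = api.load("word2vec-google-news-300")

# "king - man + woman ≈ ?"  -- typically returns "queen".
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# The same arithmetic surfaces learned gender bias:
# "computer_programmer - man + woman ≈ ?" famously returns "homemaker".
print(model.most_similar(positive=["computer_programmer", "woman"],
                         negative=["man"], topn=1))
```

The model was never told anything about gender; it simply compressed the statistics of human-written text, biases included.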

Because the algorithms were given a dataset constructed entirely of text written by humans, it became evident just how subtle, yet ever present, gendered biases are in society. We know that society is flawed, but the small biases in day-to-day decisions are often missed, and it is only when the algorithms learn them and show them back to us that we realize just how biased society is, and how that bias manifests in the smallest, yet most impactful, ways in the long run.

Case Study: COMPAS in Criminal Justice

COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions, is a tool used by judges to determine the likelihood of a defendant becoming a repeat offender. The system was designed using social and psychological constructs that are thought to be predictive for classifying career criminals.

Already there has been a landmark case, State v. Loomis, that tests the fairness of such an AI system influencing our judicial process. The case asked whether using artificial intelligence in the sentencing process violates a defendant's right to due process when the validity of the AI can be neither challenged nor understood. In the case, Eric Loomis pleaded guilty to two charges connected to a drive-by shooting. At sentencing, the state of Wisconsin consulted an AI-generated risk report, and Mr. Loomis received six years in prison. While we expect AI to be free from prejudice, AI shares many of the fallacies that humans do. What's worse is the frequent overreliance on the algorithms to provide an answer.

Mr. Loomis appealed to the Wisconsin Supreme Court, which upheld the lower court's decision that AI is permissible to use as a guide when sentencing (the U.S. Supreme Court later denied his petition for a writ of certiorari). Courts are granted wide discretion when sentencing, and absent the ability to understand how the technology truly works, we will never know whether Mr. Loomis was deprived of his legal rights.

Figuring out what is fair, within the context of the COMPAS system, is still a point of active debate. Suppose COMPAS uses a handful of factors to predict the rate at which someone will reoffend: for example, conviction type, number of priors, age, income, location, and race. To avoid discrimination, we could discard what we refer to as protected attributes, such as gender or race. But when we look at the data, those attributes can be reconstructed through other proxy variables in the system, as the sketch below illustrates. What we find is that there are too many interlinked variables at play to determine what is fair. The social sciences would refer to this phenomenon as social determinants: incredible linkage between variables, shaped by the society those variables exist in.
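To make the proxy problem concrete, here is a toy sketch on synthetic data with hypothetical variable names: we drop the protected attribute, then show that a model can recover it almost perfectly from the "neutral" features that remain.

```python
# Toy demonstration (synthetic data) that dropping a protected attribute
# does not remove it when proxy variables remain.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute we intend to exclude from the model.
race = rng.integers(0, 2, size=n)

# Residential segregation in this synthetic world makes zip code
# and income strong proxies for race.
zip_code = race * 10 + rng.integers(0, 3, size=n)
income = rng.normal(50 + 10 * race, 15, size=n)

# Predict the *protected attribute* from the remaining features alone.
# High accuracy means the attribute never really left the data.
X = np.column_stack([zip_code, income])
X_train, X_test, y_train, y_test = train_test_split(X, race, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"race recovered from proxies: {clf.score(X_test, y_test):.0%} accuracy")
```

In this toy world the recovery is near-perfect, so a model trained on the "cleaned" data can discriminate just as effectively as one trained on race directly.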

Lastly, errors in the system might not be equally distributed. For example, the system could hypothetically have a 5% error rate in giving someone a sentence that is too long. But what if that error is concentrated in one demographic group rather than spread evenly throughout the population? How then do we measure whether the system is fair to everyone and not just to one given population?
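A toy calculation (all numbers invented) makes the point: an overall error rate of 5% can look acceptable while one group quietly bears a 20% error rate.

```python
# Synthetic illustration: an aggregate error rate hides unequal group impact.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Group "A" is a 20% minority of the population.
group = np.where(rng.random(n) < 0.2, "A", "B")

# Hypothetical system: wrong 20% of the time for group A, 1.25% for group B.
error = np.where(group == "A",
                 rng.random(n) < 0.20,
                 rng.random(n) < 0.0125)

print(f"overall error rate: {error.mean():.1%}")  # ~5.0%, looks fine
for g in ("A", "B"):
    print(f"group {g} error rate: {error[group == g].mean():.1%}")
```

Any audit of such a system therefore has to report error rates per group, not just in aggregate.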

The Process of Policy

When making a policy, there is a rule of thumb that you have to balance four things: democratic responsibility, scientific accuracy, technical efficiency, and fair process. With democratic responsibility, you are trying to pass a law that people want. A law must also be scientifically accurate and feasible; a law that regulates AI but is detached from reality won't serve a useful purpose to humanity. Laws should be feasible from a technical perspective, too: a law that can only be enforced through great effort won't be as efficient as a simpler and more widely understood one. Lastly, laws should be drafted using a fair process. People are more likely to voluntarily cooperate with a system, whether they win or lose, if they perceive that the process by which the system is structured is fair. When a law is crafted through a fair process, people are more likely to cooperate during the execution phase.

At the Federal level, as well as at the State level, laws in the United States descend from English law. Before the American Revolution, the American colonies used the English common law system. After the Constitution was drafted, American law became distinct from English law. When we refer to law, we refer to all of law, which is commonly called the "body of law". That body of law can be broken down into two types: statutory law and common law. A statute is a law that has been passed by a legislative body (any deliberative assembly with the authority to make laws) of a country, state, or local government. A benefit of statutory law is that it is written down in one place and can easily be referenced.

This is in contrast to common law. Common law is sometimes known as "unwritten law" because it is never written down explicitly as statute. The reason is that when laws are written, it is impossible to anticipate every instance in which they should apply. For example, when laws regarding wire fraud were drafted, there was no reason for them to explicitly mention computer hacking, because computers had not yet been invented. The Constitution allows for laws to evolve using judge-made law, referred to as common law or case law. Case law is developed by courts and other tribunal systems, and its results can be used as precedent for future decisions. Under a process called stare decisis, decisions made by judges in previous cases guide how future cases are decided. However, if a case in front of a court is significantly different from previous cases, the judge has the authority, and the duty, to make law by creating precedent for future cases.

Case Study: People v. Ceballos, 1974

In this case, we learned about an individual named Don Ceballos who lived in a rough neighborhood not too far from Stanford, in San Anselmo, California. Ceballos' garage had been broken into in March, and when he noticed another attempted break-in on the garage doors in May, he created a booby trap: a loaded .22 caliber pistol connected by a wire to one of the doors and aimed at the center of the garage doors.

When a 16-year-old boy, Stephen, opened the garage, he was shot in the face by the pistol. Ceballos claimed that he was protecting his property from burglary and that he had every right to protect his "castle" (see the Castle Doctrine). The Supreme Court of California upheld the lower courts' ruling that found Ceballos guilty.

This case had been heard several times through the appeals process before making it all the way to the state Supreme Court because of its monumental significance. The Ceballos case is a perfect example of what is called "adjudication", where a final decision is rendered on a case after all the evidence and arguments are reviewed by the high courts. In the adjudication process, several factors go into the court's decision on a specific dispute.

The courts must consider four key parts: statutes, precedent, the impact on the present parties, and the broader impact on society. This is important to understand in the common law system we have in the United States, where a majority of legal cases are decided based on the historical decisions and logical reasoning of the courts.

With all of these factors, one can imagine the monumental task at hand. It also goes to show how the framing of a case can affect how we understand it in the larger schema of legal history. We examined how this booby trap functioned as an autonomous weapon that Ceballos had very little control over, yet was held responsible for. Although we imagine autonomous weapons as high-tech, complicated automata used in secret government projects, Ceballos' homemade trap was indeed an autonomous weapon: a weapon that would inflict bodily harm on an individual without any human intervention.

When a machine decides for the human, who is responsible? Whom can the courts point to? As the saying goes, guns don't kill people... people kill people. In this case, we ask ourselves: did a person technically harm another if it wasn't he who pulled the trigger? These are broad questions with broad implications, so the courts try to narrow the playing field by focusing on the intersection of law, ethics, social values, and politics.

The courts decided that Ceballos was responsible for the injury caused to the intruder for a multitude of reasons, but let's dissect two key ones. First, historical precedent has established that a person can be liable for a crime if they create a deadly mechanical device that kills or injures another. In the opinion, the judge cites dozens of cases where this was so. We list some here just to show that a question being asked in California in 1974 had been asked in other local and regional courts over the same century, and that they are all tied together in this common law system (Katko v. Briney (Iowa), State v. Plumlee (Louisiana), State v. Beckham (Missouri), State v. Childers (Ohio), Marquis v. Benfer (Texas), Pierce v. Commonwealth (Virginia)).

Second, Ceballos used unnecessary force against the intruder. Ceballos was not present at the time of the automatic shooting and was not under any duress in which the intrusion threatened death or serious bodily injury. Essentially, the court argued that the "defend your castle" argument fell short because Ceballos reacted with more force than necessary when he himself was not in danger. If Ceballos had made a more calibrated decision in defending his property (e.g., simply notifying the police or capturing video footage of the burglar) based on the factors of the situation, perhaps he would not have been culpable the way he was.

Although those were the exact facts of the case, the opinion (which was unanimous) has to, by the nature of the common law system, offer some framework and rationale for how the court reached its decision. Oftentimes these frameworks and rationales, which speak not to the facts of the case but to society and existing political systems, appear in opinions in the form of dicta. In this case, the court opined that,

“Allowing persons, at their own risk, to employ deadly mechanical devices imperils the lives of children, firemen and policemen acting within the scope of their employment, and others. Where the actor is present, there is always the possibility he will realize that deadly force is not necessary, but deadly mechanical devices are without mercy or discretion. Such devices “are silent instrumentalities of death. They deal death and destruction to the innocent as well as the criminal intruder without the slightest warning. The taking of human life [or infliction of great bodily injury] by such means is brutally savage and inhuman.” - Burke, J., writing for a unanimous court

The courts ruled against autonomous weapons as "silent instrumentalities of death" because they are "without mercy or discretion." AI, however, is rapidly changing that paradigm. The current American legal system has a strong opinion about weapons that are "automatic" in the sense that they act without human intervention, but we lack clear answers on what to do when those weapons develop "discretion" that can be objectively better than a human's.

Law Isn't a Silver Bullet, It Is Part of a Larger Intersectionality

Theory of Intersectionality

Throughout the course already, it has been emphasized that law is not solely about the words written down on paper that define our statutes, but about the larger political economy at play that includes politics, policy, ethics, social values, and emerging ideas (including technological).

Law intakes various "data" sources, weights them slightly differently each time, and then outputs decisions that help inform and shape those weights, as well as build stronger feedback loops about what is "working" and "not working". Simply put, law resembles artificial intelligence more than we think. That is intentional: law resembles artificial intelligence, and artificial intelligence helps us model decision-making and systems.
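As a purely illustrative sketch of that analogy (every name and number here is invented, and this models no real legal process), a decision system that weighs several sources and nudges those weights based on feedback looks a lot like a tiny online learner:

```python
# Illustration of the analogy only:
# weighted inputs -> decision -> feedback that reshapes the weights.

# Hypothetical "sources" the system weighs when deciding.
weights = {"statute": 0.5, "precedent": 0.3, "social_values": 0.2}

def decide(inputs):
    """Weighted combination of sources, like a one-layer model."""
    return sum(weights[k] * inputs[k] for k in weights)

def feedback(inputs, outcome_worked, lr=0.05):
    """Reinforce or dampen each source's weight based on how the decision landed."""
    sign = 1 if outcome_worked else -1
    for k in weights:
        weights[k] = max(0.0, weights[k] + sign * lr * inputs[k])
    total = sum(weights.values())
    for k in weights:  # renormalize so the weights stay comparable
        weights[k] /= total

case = {"statute": 0.9, "precedent": 0.4, "social_values": 0.7}
print(decide(case))                    # the "ruling"
feedback(case, outcome_worked=False)   # society pushes back; weights shift
print(weights)                         # the mirror has been adjusted
```

The loop is the point: each decision feeds back into the weights that shape the next one, which is exactly the resemblance to machine learning we have in mind.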

It seems that computers, as a result of artificial intelligence, have become incredibly good at intaking various information, valuing that information, and making decisions. "Law is a mechanism for resolving disputes and assigning meaning to what people do and why they do it — a kind of operating system for people and organizations" (Slide 7 from Class 3). Thus the intersection of law and artificial intelligence not only necessitates a different kind of lens to understand and execute upon properly; it IS a different ball game altogether.

Cases like Ceballos will only get more complex with the advent of artificial intelligence, and will become even more exigent as facial-recognition technology advances in home devices like Nest, devices which have turned each individual American household into essentially a self-policed state. Right now, the technology identifies and notifies a human, but what happens as we develop technology that not only identifies and notifies, but also takes action itself?

That's where the regulation of the intersection of law and artificial intelligence will have to come in. If we do nothing, that intersection will still be regulated, by existing law.

The United States currently uses a distinct mechanism for regulating complex legal and technical issues. By count of laws, administrative law is one of the largest bodies of law in the United States. The Federal Government is complicated, but to give a quick overview, it is composed of Congress (to make laws), the executive (to enforce laws), and the courts (to interpret laws). Congress doesn't want to get involved in making all the laws, specifically laws that require a lot of domain expertise, so it delegates power by creating administrative agencies. For example, the Securities and Exchange Commission (SEC) is an administrative agency created by Congress to handle complex laws around financial assets. Other administrative agencies include the FCC (Federal Communications Commission), EPA (Environmental Protection Agency), DOJ (Department of Justice), NSA (National Security Agency), FBI (Federal Bureau of Investigation), FTC (Federal Trade Commission), DOE (Department of Energy), NIST (National Institute of Standards and Technology), and many, many others.

Collectively, administrative agencies employ millions of government officials and non-government support workers. While agencies are created by Congress, they are operated by the executive. Agencies are led by someone appointed by the President and confirmed by the Senate. With a politically appointed leader, agencies create rules that are in line with the views of the executive, which is itself a product of the will of the people. While the President can appoint most leaders of an agency, there are also independent agencies, which are more insulated from presidential control. As a rule of thumb, independent agencies have "board" or "commission" in the name, such as the FCC (Federal Communications Commission).

Federal agency power has come about gradually to accommodate societal changes. After the New Deal, the role of the federal government expanded beyond its traditional bounds. Working out how these agencies should be structured, and how much power they should have, was a hard-fought process.

While agencies are not perfect, they are useful within the legal system for bringing certainty to uncertain legal questions; take, for example, how law and self-driving cars intersect. And while agencies create law, society would not be lawless without them: common law would still apply. It is the default form of regulation within our government when no other form of law applies. Law facilitates regulation and how we govern, but it does not always work so well on its own, which is why we have administrative agencies.

As an example of where regulation can fail, consider global warming. We have known about global warming since the 1800s, yet we did nothing; regulation does not always work. Even worse, most greenhouse gas emissions have occurred within the past 30 years. One of the reasons we were not able to regulate greenhouse gases is how complicated the process was and how many parties and legal systems were involved. Thus, while regulation can be beneficial, it is only beneficial if we can collectively agree on it.

The flip side is that regulation, when enacted, can also be harmful. In response to the corporate accounting scandals at Enron and WorldCom, U.S. Senator Paul Sarbanes and U.S. Representative Michael G. Oxley introduced the Sarbanes–Oxley Act. The act imposes additional requirements on public companies. While these look good on the surface, Sarbanes–Oxley increases the cost of going public. Now it is too expensive for smaller companies to reach public markets, and as a result the number of IPOs has been significantly reduced. It is now hard for retail investors to get access to these companies, while big-money investors reap the benefit of being able to invest in them privately. So while regulation can be good, it can also have downsides; in this example, many have argued that the increased overhead costs associated with the act are not worth it. How much does the regulation actually prevent fraud by people who are determined to commit it? And how many innocent people get caught up in the associated costs of complying with it? With regulation comes increased cost and overhead for people.

The other reason is that regulation comes at a cost to someone else. For each decision we make, we make a set of complex choices that carry burdens for others. In the example of global warming, how much GDP is society willing to give up for a given reduction in greenhouse gas production?

Lastly, no law implements itself automatically. To enact a law requires someone to carry the cost of explaining to the winners and losers why that law was enacted, and with each of these explanations they will lose popularity in the short term for benefits they will not reap.

Without agencies, legal questions would be solved by other law. There would still be laws that apply, but they wouldn't be as clear cut. Agencies simplify decision making when it is easier to have one agency solve a specific problem, such as self-driving cars, than to rely on torts and criminal law. Before administrative agencies, we relied more on local governments and social norms.

The need for a multi-disciplinary approach becomes more and more apparent as we start to think about how we discuss, legislate, and enforce artificial intelligence. AI merits a type of "regulatory thinking" in which we consider all the factors at play, including the political economy and the intersectionality of where law sits in society. If we are forced to speak a new language altogether to understand regulating artificial intelligence, perhaps we get the opportunity, and the burden, to redefine what regulation means in order to understand these far more complex trade-off systems. Right now, the default form of law is common law, and as we start to think about the larger distributional consequences, it is important to understand the supersonic pace and scale of technology.

If we look at how much human DNA has changed over the past 500 years, it is inconsequential. However, if we look at the legal protections that humans have developed over the last 500 years, the change is enormous. Then look at how much AI code has changed in the last 10 years: it has changed a hundredfold. Yet the legal ideas and protections around AI have not changed at all.

Technology moves so fast that by the time we experience its consequences, it has already gained such widespread adoption and scale that it becomes much harder to contain. We are trying to use one imperfect system to govern another, and learning from our mistakes can be hard and slow. Fortunately, artificial intelligence is really good at learning from itself and from the systems it is taught on. Unfortunately for us, in order to become really good, artificial intelligence needs to make a lot of mistakes in the beginning in order to learn from them. The question that might keep us awake is whether our institutions and societies can survive those mistakes long enough to reach the light at the end of the tunnel.
