Chapter 5: Administrative Law and AI

How the Federal Government is Using AI to Govern

Michael Fischer
Stanford Law: Regulating AI
Apr 23, 2020

--

By Mikey Fischer and Shreyas Parab

In the United States, Congress passes around 300 laws a year. But there is an entire body of law that Congress does not directly make, leaving it instead to federal agencies to develop themselves. Most people, though, know little about this much larger, more expansive part of public law. It is called the administrative state, and this regulatory mechanism produces roughly 10,000 new or revised rules a year. The administrative state comprises many agencies you have probably heard of, including the SEC, FDA, EPA, and dozens more.

The federal government is complicated, but to give a quick overview, it is composed of Congress, which makes laws; the executive, which enforces laws; and the courts, which interpret laws. However, Congress does not want to be involved in making every law, particularly laws that require deep domain expertise. Congress creates administrative agencies through enabling statutes, and the Administrative Procedure Act governs how those agencies legislate, adjudicate, and enforce regulations. For example, the Securities and Exchange Commission (SEC) is an administrative agency created by Congress to handle complex laws around financial assets. Other administrative agencies include the FCC (Federal Communications Commission), EPA (Environmental Protection Agency), DOJ (Department of Justice), NSA (National Security Agency), FBI (Federal Bureau of Investigation), FTC (Federal Trade Commission), DOE (Department of Energy), NIST (National Institute of Standards and Technology), and many others. Collectively, administrative agencies employ millions of government officials and non-government support workers.

While agencies are created by Congress, they are operated by the executive branch. Agencies are led by someone appointed by the President and confirmed by the Senate. With a politically appointed leader, agencies create rules in line with the views of the executive, which is itself a product of the will of the people. While the President can appoint the leaders of most agencies, there are also independent agencies, which sit further from presidential control. As a rule of thumb, independent agencies have "board" or "commission" in their name, such as the FCC (Federal Communications Commission).

One of the primary principles of the Constitution is the separation of powers. Each branch of government has certain powers, and these powers are limited and checked by the other branches. Agencies, however, are able to make, enforce, and interpret regulations with the force of law. Each of these functions, though typically delegated to a different branch of government, sits under the same roof of an administrative agency. Some people have raised concerns about this muddling of powers that are usually separated and maintained with checks and balances.

Agencies are subject to the courts and Congress. However, they get wiggle room on each side. On the courts' side, since Congress grants broad discretion to regulatory agencies, courts do not want to interfere with the ability of the executive to administer the law (see Chevron v. NRDC, which "compels federal courts to defer to a federal agency's interpretation of an ambiguous or unclear statute"). On the congressional side, agencies have wiggle room to interpret the statutes they were given by Congress (see Heckler v. Chaney, which finds that administrative agencies have discretion over which enforcement actions to take). In effect, many administrative agencies are microcosms of the entire government, with the ability to create, enforce, and adjudicate laws.

The point of all this is that administrative agencies get a lot of autonomy in how they operate. Each is a microcosm of a government, operating independently of the others. That independence lets them serve as laboratories of innovation in democracy. A common experiment across these agencies is how to use artificial intelligence to transform their practices to meet the evolving needs of the 21st-century American public.

Why Do Administrative Agencies Use AI?

When a private company uses AI, the motive is usually clear: to improve their bottom line. AI has the ability to significantly reduce operational costs and operate more effectively than humans at specific tasks. When the government uses AI, however, we often see a different kind of incentive structure.

Oftentimes the use of new, cutting-edge technology is driven by defense needs. National security risk is often the best catalyst for the government to implement a new technology or fund its research. Just as government spending spurred the creation of the earliest computers, it also spurs the creation and adoption of AI within government agencies.

Another reason for adopting AI is the operational advantage it can offer the government. There is a noticeable gap between the resources the government has and the demand for the services it offers. Administering Social Security across the country or issuing documentation for the entire population is difficult, and AI offers far more scalable solutions than digging ever deeper into the federal budget to scale operations. That scalability also enhances the reliability of government capabilities: agencies can do more when they no longer have to spend their time on small, low-level, automatable tasks. In that respect, the incentives resemble those of a private-sector company.

Finally, and perhaps the least flattering motive, there is political pressure on the government to modernize. Government is notorious for struggling to adapt to the technology that is ubiquitous in everyday American life. By implementing some technology and being able to claim it has put cutting-edge tools into its systems, the government counters the narrative that it lags behind the available technology.

The founding principles of administrative law are built on transparency and accessibility to the average citizen. Due process is a cornerstone of American democracy and one that differentiates it from other so-called democracies around the world. Due process is a series of steps that bring the individual citizen "into the fold" of government operations and procedures. The way the government conducts administrative law is primarily contained in the Administrative Procedure Act, which emphasizes and requires due process in the activities of each agency.

Case Study: Notice and Comment

The heart of administrative law lies in a section of the US Code called the Administrative Procedure Act (5 USC §551 et seq.). The APA lays out the method by which federal agencies may propose and establish the regulations they will enforce. One key feature of the APA is the "notice and comment" procedure, which requires regulatory agencies to publish proposed rules and review the "submission of written data, views or arguments". This is an essential, yet sometimes messy, part of democracy. It is intuitive that the public should get to weigh in directly on the regulation that affects them, but on an issue that affects a large group of people, such as net neutrality, the government can receive millions of comments.

In fact, the Pew Research Center analyzed the more than 21.7 million comments submitted to the FCC during the official comment period and found that only 6% of online comments were unique. Most of the remaining comments were repeats of one another, sometimes submitted hundreds of thousands of times; the seven most-submitted comments made up 38% of all submissions. Abuse of the public notice and comment period is not limited to net neutrality. A WSJ investigation found that about 41% of 19,000 survey respondents had never actually submitted comments, even though their contact information was attached to comments sent to agencies like the Department of Labor, the Consumer Financial Protection Bureau, and the Federal Energy Regulatory Commission. This means impersonators used the real identities of thousands of people without their knowledge and could potentially have swayed regulation. From our election process, we know just how detrimental such interference can be to democratic values. A key challenge facing these agencies is making sure each real citizen's voice is heard through the noise, while also managing the hard administrative cost of reading all these comments. This is where AI could come into play: ranking and assessing the validity of comments, eliminating repeats, and triaging them more effectively using natural language processing.
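To make the deduplication idea concrete, here is a minimal sketch of how near-identical form-letter comments could be flagged with off-the-shelf NLP tooling. This is purely illustrative and not any agency's actual pipeline; the example comments, the TF-IDF features, and the similarity threshold are all assumptions.

```python
# Illustrative sketch: flag near-duplicate public comments using TF-IDF
# vectors and cosine similarity (scikit-learn). Not an agency's real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "I support net neutrality and oppose the proposed repeal.",
    "I support net neutrality, and I oppose the proposed repeal!",
    "The Title II framework burdens broadband investment.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(comments)
sim = cosine_similarity(tfidf)

# Pairs above a similarity threshold are treated as form-letter repeats,
# so a human reviewer only needs to read one representative of the cluster.
THRESHOLD = 0.9
for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        if sim[i, j] >= THRESHOLD:
            print(f"comments {i} and {j} look like near-duplicates "
                  f"(similarity={sim[i, j]:.2f})")
```

A production system would layer more on top of this, for example identity verification against the submitter's contact information, but clustering repeats is the step that most directly cuts the reviewing burden.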

Case Study: Business Roundtable v. SEC

When an agency enacts a rule, the rule must be supported by a reasoned explanation, or it will be rejected by a reviewing court. In Business Roundtable v. SEC, the SEC adopted a new rule allowing shareholders to nominate candidates for the boards of public companies. Business Roundtable, which represents a variety of businesses, opposed the rule because it believed the rule would hurt companies' ability to elect their preferred directors. Before implementing a rule, the SEC is required to study the economic consequences of the proposed regulation before it goes on the books. The court found that because economic factors were not adequately considered, the rule was "arbitrary and capricious": it was created without following the required steps and was therefore invalidated. The upshot is that because the SEC did not follow the rules for rulemaking, specifically by failing to determine the rule's economic impact, the rule could be struck down.

Case Study: Heckler v. Chaney

In this Supreme Court case, a group of prisoners in Oklahoma and Texas who had been sentenced to death argued that the lethal injection drugs to be used to execute them were not approved by the FDA for that specific use. Essentially, they asserted that the FDA had certified those drugs, but not for the purpose of execution. The FDA refused to take enforcement action on the prisoners' complaint, which the prisoners argued was a violation of the agency's duties. The Supreme Court ruled that under the Administrative Procedure Act, the FDA's decision not to act was presumptively unreviewable by the courts. This granted more freedom to federal agencies, which get flexibility in their enforcement decisions because they are not subject to the same standards of judicial review as other parts of the government.

How Do Administrative Agencies Use AI?

Now that we know WHY government agencies might use AI, it is important to see how AI is currently used in government and where future uses of the technology might be. Let's first look at what government agencies are using AI for. Research led by Professor David Engstrom of Stanford Law School canvassed the federal agencies to uncover their usage of AI. The research found that over 26% of AI use cases in federal agencies involved prioritizing enforcement of the law. The next most common use cases were monitoring (20.9%), regulatory research (18.4%), and prioritizing work assignments (17%). Although these categories are broad, we start to see how much of the focus of AI in government agencies surrounds security and protection. This makes sense given just how effective a catalyst security can be for getting things done in government; the defense budget of the United States is roughly the same as that of the next seven countries combined.

Surprisingly enough, however, the federal agencies using AI across the widest variety of use cases were NASA, the Securities and Exchange Commission, and the Social Security Administration. These are three agencies not traditionally associated with security or protection, but many of their automated tasks span the entire agency, resulting in a higher volume of usage.

For example, one of the most common use cases across federal agencies is fraud detection. The IRS uses AI to detect fraudulent tax returns in which perpetrators use identities stolen from the incarcerated or the deceased. The Department of Health and Human Services uses AI to detect fraudulent insurance claims by providers and patients who try to exploit the massive amount of paperwork the agency handles to sneak in additional claims. These use cases are common across industry, not just in federal agencies but also in the private sector, and many of the solutions and integrations are provided by third parties that also serve private entities.

In this section, we dig deeper into specific use cases of artificial intelligence at individual federal agencies. We will try to understand what these uses have been and why they have been so transformative, not only to workflow (efficiency) but also to performance (effectiveness).

Case Study: Social Security Administration

The Social Security Administration is responsible for administering the Social Security and Disability Insurance programs that cover workers between 18 and 65 who have participated in them. The SSA is responsible for settling all claims and filings for disability benefits across the nation and thus has to adjudicate every single claim. Adjudication is the legal process by which a judge reviews pertinent evidence and the arguments made by those filing the claim to ensure that the claim is valid and fits the necessary criteria. The SSA is widely considered the "largest adjudication agency in the western world." It is tasked with reviewing north of 2.5 million disability claims, of which almost a quarter are appealed to an in-person hearing. Depending on the case, adjudicating a claim can take anywhere from a couple of months to over two years.

Not only were there problems with the efficiency of adjudicating claims, there were also questions about the fairness of these hearings. Some judges awarded claims 90% of the time, while others did so only 10% of the time. The SSA recognized several areas where artificial intelligence could both ease the load on the adjudication system and improve the fairness of the process.

The SSA has been able to build predictive models so that applicants who are most likely to qualify, based on a variety of factors, can be processed almost immediately. Being able to predict the cases that were going to end up a particular way regardless of a human-in-the-loop hearing significantly reduced the burden on the system.
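A minimal sketch of what such a fast-track triage model might look like appears below. The features, training data, and confidence threshold are all invented for illustration; the SSA's actual models are not public in this form.

```python
# Illustrative sketch of a "fast-track" triage model with hypothetical
# features; not the SSA's actual model or feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical claims: [age, months_worked, severity_score]
X = np.array([[62, 240, 9], [30, 24, 2], [58, 300, 8], [45, 100, 3]])
y = np.array([1, 0, 1, 0])  # 1 = claim was ultimately allowed

model = LogisticRegression().fit(X, y)

# Fast-track only the claims the model is nearly certain about; everything
# else goes to ordinary human adjudication, keeping a person in the loop.
new_claims = np.array([[61, 250, 9], [40, 60, 4]])
probs = model.predict_proba(new_claims)[:, 1]
for p in probs:
    route = "fast-track for approval" if p > 0.95 else "standard adjudication"
    print(f"P(allow)={p:.2f} -> {route}")
```

The key design choice is the asymmetric threshold: the model is only trusted to short-circuit the queue when it is nearly certain a claim will be allowed, which is how likely grants can be paid quickly without automating denials.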

The SSA has also been able to use NLP to check that the decisions and rulings an SSA judge drafts properly reflect the analysis and due diligence required by regulation. Essentially, the software analyzes a draft decision, checks for roughly 30 factors that regulation requires an appeals decision to address, and flags sentences that contradict earlier findings, either of which would suggest policy noncompliance or internal inconsistency. Since August 2017, these tools have been used over 200,000 times, and recent testimony before Congress strongly encourages expanding the use of NLP to review adjudication decisions and detect biases and inconsistencies.
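At its simplest, the "required factors" portion of such a tool can be sketched as a pattern-matching pass over the draft text. The checklist and patterns below are invented stand-ins; the SSA's real tool and regulatory checklist are far more extensive.

```python
# Illustrative sketch of a quality-assurance pass over a draft decision.
# The factor names and regex patterns are hypothetical examples only.
import re

REQUIRED_FACTORS = {
    "residual functional capacity": r"residual functional capacity",
    "past relevant work": r"past relevant work",
    "medical opinion weight": r"weigh(ed|ing)? .* medical opinion",
}

def check_draft(draft: str) -> list[str]:
    """Return the required factors that the draft fails to address."""
    text = draft.lower()
    return [name for name, pattern in REQUIRED_FACTORS.items()
            if not re.search(pattern, text)]

draft = ("The claimant's residual functional capacity permits sedentary "
         "work. The claimant cannot perform past relevant work.")
print("Missing factors:", check_draft(draft))
# -> Missing factors: ['medical opinion weight']
```

Detecting sentences that contradict earlier findings is a harder NLP problem than this checklist pass, but even simple required-element checks catch a meaningful share of policy-noncompliant drafts before they go out.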

Case Study: EPA Regulation Enforcement

The Environmental Protection Agency is tasked with protecting America's environment and curbing the impacts of global warming on American life. One way it protects the nation is by regulating large polluters such as concentrated animal feeding operations (CAFOs). CAFOs are agricultural operations with a critical mass of animals that produce large amounts of waste, which can potentially come into contact with the water supply. Unfortunately, the EPA has had no good way of locating these massive polluters. In 2019, researchers at Stanford University showed how to use deep learning to efficiently identify CAFOs so the EPA can intervene appropriately. At the time, no federal agency had reliable, verified information on the number, size, and location of farms that could potentially be CAFOs. By leveraging publicly available agricultural imagery, the researchers trained a model to identify facilities with large heat signatures or large containment structures. Their algorithm identified 15% more poultry CAFOs than were previously recorded, and it successfully detected 93% of the CAFOs in the study area. With tools like these, the EPA could identify CAFOs not only significantly faster but also more accurately. This method was far more capital-efficient than manually sending investigators to audit each potential CAFO, and there appear to be many more applications of computer vision on satellite imagery that the EPA could pursue in the near future.
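To give a feel for the technique, here is a minimal sketch of fine-tuning a pretrained convolutional network to classify satellite image tiles as CAFO or not-CAFO. The Stanford study's actual architecture, data, and training setup differ; everything below, including the dummy batch, is a stand-in.

```python
# Illustrative sketch: fine-tune a pretrained CNN for CAFO tile detection.
# Architecture, data, and hyperparameters are assumptions, not the study's.
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace the classifier
# head with a two-class CAFO / not-CAFO output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch of 224x224 RGB tiles.
tiles = torch.randn(8, 3, 224, 224)   # stand-in for real imagery tiles
labels = torch.randint(0, 2, (8,))    # 1 = tile contains a CAFO
optimizer.zero_grad()
loss = criterion(model(tiles), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```

Once trained, a model like this can be swept across every tile of public imagery for a region, turning an impossible manual search into a ranked list of candidate sites for human investigators.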

Although this approach offers greater effectiveness and efficiency, one key trade-off is the privacy concern it creates for farmers. Although the satellite images were publicly available, questions were raised about whether farmers had consented to having their farms monitored. These privacy concerns are well founded, but have yet to produce specific legal action against this technology or practice. In the future, we can surely expect many interesting legal questions surrounding privacy and compliance regulation.

Case Study: SEC Using AI to Prioritize Enforcement

One of the roles of federal agencies is to enforce regulation and make sure that entities are complying with it, so it makes sense that one federal agency relying heavily on the prioritization of enforcement is the Securities and Exchange Commission.

The mission of the Securities and Exchange Commission (SEC) is to "protect investors, maintain fair, orderly, and efficient markets, and facilitate capital formation." To achieve these regulatory objectives, the SEC issues rules governing securities exchanges, securities brokers and dealers, investment advisors, and mutual funds. The SEC not only has the authority to issue rules under the various federal securities laws but can also bring enforcement actions against those who violate them; it brings hundreds of such enforcement actions each year. The SEC's wide-ranging regulatory and enforcement duties are reflected in its structure and organization. The Commission is headed by five presidentially appointed commissioners, one of whom serves as chairperson, and is organized into five divisions and several standalone offices.

The SEC is the federal agency responsible for proposing new rules in securities (e.g., stock exchanges and capital markets) that create a fair marketplace for all players. It receives massive amounts of data in the form of transaction histories, market movements, and thousands of reports of non-compliance across the industry, and it must find a way to triage each case to make sure it is appropriately handled.

Cases take a lot of time and money for the SEC to investigate and pursue. With limited resources, it would be impossible to expect an agency to go after every violation, as described in Heckler v. Chaney, discussed earlier in the chapter. Administrative agencies such as the SEC have thus turned to artificial intelligence to help parse the data and flag particularly high-risk violations.

To detect fraud in accounting and financial reporting, the SEC has developed the Corporate Issuer Risk Assessment (CIRA). CIRA is a dashboard of some 200 metrics that are used to detect anomalous patterns in financial reporting of corporate issuers of securities. Today, there are over 7,000 corporate issuers who must submit financial statements, such as annual 10-K and quarterly 10-Q forms, to the SEC for oversight. These reports can be hundreds of pages long, containing general business information, risk factors, financial data, and so-called MD&As (Management’s Discussion and Analysis of Financial Condition and Results of Operations).

Analyzing this immense body of reports is a resource-intensive process, and, as with any agency, the SEC has limited resources with which to do it. CIRA's goal is to help the agency use its finite resources more efficiently by identifying corporate filers that warrant further investigation. One way SEC staff have sought to manage large data flows is through a machine learning tool that identifies filers who might be engaged in suspect earnings management. The tool is trained on a historical dataset of past issuer filings and uses a random forest model to predict possible misconduct from indicators such as earnings restatements and past enforcement actions.
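The source describes a random forest trained on indicators like restatements and past enforcement actions; the sketch below shows what that kind of screen looks like in code. The feature names, data, and scores are invented, since the SEC's actual model and training data are not public.

```python
# Illustrative sketch of a random-forest misconduct screen over issuer
# filings. Features and data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-issuer features:
# [num_restatements, past_enforcement_actions, abnormal_accruals_score]
X = np.array([[2, 1, 0.9], [0, 0, 0.1], [1, 0, 0.7], [0, 0, 0.2],
              [3, 2, 0.8], [0, 1, 0.3]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = later misconduct finding

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score new filings and surface the highest-risk issuers for human review.
filings = np.array([[2, 1, 0.85], [0, 0, 0.15]])
risk = forest.predict_proba(filings)[:, 1]
for i, r in enumerate(risk):
    print(f"issuer {i}: fraud risk score {r:.2f}")
```

The output is a ranking, not a verdict: as the next paragraph notes, enforcement staff scrutinize these scores alongside other metrics before anything becomes an investigation.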

Enforcement staff scrutinize the results, thus maintaining a human eye, and consider them alongside a range of other metrics and materials. Though the algorithmic outputs are only part of a broader analysis, SEC staff report that CIRA’s algorithmic component improves the allocation of scarce enforcement resources.

The SEC is hardly alone in leveraging AI to perform enforcement-related tasks. The Internal Revenue Service (IRS) and the Centers for Medicare and Medicaid Services (CMS) have deployed algorithmic tools designed to predict illegal conduct and more precisely allocate scarce agency resources toward audit or investigation. The IRS, for instance, has responded to budget and workforce cuts by investing over $400 million to develop and operate a fraud detection algorithm, the Return Review Program (RRP), that generates fraud risk scores for all national individual tax returns claiming a refund.

This steadily growing catalog of algorithmic enforcement tools holds significant implications for the future of regulatory governance.

Making Sure That Algorithms Are Accountable to Humans

The proliferation of algorithmic enforcement tools at the SEC and beyond highlights especially difficult trade-offs between the efficacy of the new tools and the accountability concerns that animate administrative law.

An important debate asks how much transparency, from high-level explanations of how a tool works all the way to open-sourcing a tool and the data it uses, is necessary to gauge a tool’s fidelity to governing law.

One of the trade-offs many systems face is between explainability and accuracy: as a system becomes more explainable, it often gives up some accuracy. Is it worth having a tool that performs worse in exchange for a better understanding of how it works?

A critical question is whether continued uptake of algorithmic tools by enforcement agencies will, on net, render enforcement decisions more or less accountable. On the one hand, the black-box nature of machine learning tools may exacerbate accountability concerns. On the other hand, by formalizing and making explicit agency priorities, algorithmic enforcement tools can render an agency's enforcement decision-making more tractable than the dispersed human judgments of agency enforcement staff. Algorithmic enforcement tools might thus provide a "focal point" for judicial review, undermining the normative foundation of long-standing legal doctrines, embodied by the Supreme Court's Heckler v. Chaney decision, that hive off agency enforcement decision-making from judicial review. By encoding legal principles and agency policies and priorities, algorithmic enforcement tools might also qualify as "legislative rules" under the Administrative Procedure Act and thus require full ventilation via notice and comment.

The result, though it runs contrary to much contemporary commentary, is that displacement of agency enforcement discretion by algorithmic tools may, on net, produce an enforcement apparatus that is more transparent, whether to reviewing courts or to the agency officials who must supervise enforcement staff.

But legal demands of transparency also produce further trade-offs in the enforcement context because of the risk that public disclosure of a tool’s details will expose it to gaming and “adversarial learning” by regulated parties. An SEC registrant with knowledge of the workings of the SEC’s Form ADV Fraud Predictor could adversarially craft its disclosures, including or omitting key language in order to foil the system’s classifier.

A key line of inquiry in the enforcement area will be what degree of transparency, and what set of oversight and regulatory mechanisms, can reach a sensible accommodation of interlocking concerns about efficacy, accountability, and gaming.

Algorithmic enforcement tools may also, in time, work a fundamental change in the structure and legitimacy of the administrative state. They are force-multipliers that let an agency do more with less by identifying regulatory targets more efficiently. In this sense, the advent of algorithmic enforcement tools could halt or even reverse the decades-long shift away from public enforcement and toward private litigation as a regulatory mode.

The advent of algorithmic enforcement may also supplant expertise within the federal bureaucracy, exacerbating a perceived trend toward politicized federal administration and the hollowing out of the administrative state. This is especially worrying because, at least for the moment, line-level enforcers appear to play a key role in bolstering the accountability of new algorithmic tools. Because SEC enforcement staff can choose whether to use algorithmic enforcement tools, agency technologists must sell skeptical line-level staff on their value. SEC technologists report that line-level enforcement staff are often unmoved by a model’s sparse classification of an investment advisor, based on dozens of pages of disclosures, as “high risk.”

They want to know which parts of the disclosures triggered the classification and why. This is pressing agency technologists to focus on explainability when building their models, drawing on frontier research into how to isolate which data features in an AI system may be driving an algorithmic output. Staff skepticism and the demand for explainable outputs raise the possibility that governance of public-sector algorithmic tools will at times come from "internal" due process, not the judge-enforced, external variety.
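One common family of techniques for isolating which features drive a model's output is permutation importance: shuffle one feature at a time and measure how much performance degrades. The sketch below uses invented features and synthetic data, and is offered only to make the idea concrete, not as the SEC's method.

```python
# Illustrative sketch of permutation importance as an explainability tool.
# Features, data, and names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Label depends strongly on feature 0, weakly on feature 1, not on feature 2.
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A large accuracy drop when a feature is shuffled means the model leans
# on that feature, which is exactly what a skeptical enforcer asks about.
for name, imp in zip(["disclosure_length", "fee_changes", "office_moves"],
                     result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
```

An answer like "the classification is driven mostly by fee changes" is the kind of feature-level explanation that can move a skeptical line-level enforcer from a bare "high risk" label to an actionable lead.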

Finally, as algorithmic tools move closer to the core of the state’s coercive power, they may systematically shift patterns of state action in ways that raise distributive and, ultimately, political anxieties about a newly digitized public sector. As already noted, gaming reduces the efficacy of algorithmic systems and risks rendering their outputs fully arbitrary. But gaming is also likely to have a distributive cast, particularly in the enforcement context.

The predictions of the SEC’s Form ADV Fraud Predictor as to which investment brokers are likely to be the bad apples may fall more heavily on smaller investment firms that, unlike Goldman Sachs, lack a stable of computer scientists who can reverse-engineer the SEC’s system and work to keep their personnel out of the agency’s cross-hairs. A narrow focus on technical and capacity-building challenges misses the profound political implications of the current algorithmic moment.

As the SEC’s experience illustrates, AI/ML tools have the potential to help enforcement agencies flag potential violations of the law and focus agency attention in a world of scarce resources. This improved accuracy and efficiency may come at a cost, however. As AI/ML tools get ever more sophisticated, they also pose real threats to the transparency and democratic accountability of enforcement agencies and of the regulatory state as we know it.

Constraints and Challenges of Government Use of AI

So why do government agencies struggle with the use of artificial intelligence? The most pressing answer is that many people in government are not technologists by training, nor are all agencies equipped with cutting-edge technology, given budget constraints. The lack of understanding of technology and its implications among lawmakers was most evident during the Facebook hearings, when the American people quickly saw just how out of touch with technology lawmakers can be. Fortunately, most of the relevant experts reside in the federal agencies discussed above, not necessarily in Congress.

But why don't federal agencies, even those with the technical talent and domain knowledge to use artificial intelligence, experience the radical transformation that entities in the private sector have? The main obstacles keeping government from fully embracing artificial intelligence certainly include the expected bureaucratic friction that slows work, but they also stem in large part from the expectations placed on government. Government is expected to be transparent and fair to all involved, meaning every decision comes with public scrutiny and accountability. Moreover, when those in government violate the will of the people, Americans can rely on free and fair elections to oust those who no longer govern according to that will. Unfortunately, many artificial intelligence systems offer little transparency or explainability and lack the accountability that government is expected to provide. Several court cases have tried to address this very question, centering on the legal concept of "due process".

Implications of the Case Studies and an Overview of Federal Agency Use

From these case studies we can learn a lot about the implications of AI for administrative law. Administrative law insulates agency decisions from review by traditional courts: under the Administrative Procedure Act (APA), it is hard to challenge an administrative court's ruling in federal court.

The reason for this is that federal judges, the ordinary judges most people are familiar with, are generalists who must decide cases across all areas of law. Administrative judges, on the other hand, are confined to certain areas of law in which they become experts. For example, we do not expect a federal judge to have the same domain knowledge as an administrative judge at the Food and Drug Administration when deciding cases about food safety laws.

Due to practical considerations and budgetary constraints, federal courts are wary of getting involved in the decisions of administrative judges and second-guessing their rulings. Imagine if every decision by an administrative judge then had to go to a "less qualified" federal judge: it would be burdensome to have a less specialized judge overrule the expert one, and it would double the costs. In law there is also a concept called finality, the idea that conflicts must eventually come to an end, and ideally without reaching the highest court in the land, which would overwhelm a Supreme Court that hears only about 70 cases a year.

Explainability, Due Process, and Equal Protection

When a decision is made by an administrative agency (or a court), how does society resolve the tension between fairness, responsiveness, limiting errors, and varying levels of expertise? Is there an overriding doctrine that could make this possible?

One way is to require that every decision made by the government come with an explanation of why it was made. The United States has developed systems for giving such explanations so that people can tell whether an error occurred in the decision-making process, as well as rules for determining when an explanation must be given.

Giving an explanation of why the government is restricting someone's rights is costly, but it is a necessary part of due process. An explanation gives a reason for the action taken. It ensures that decisions are not arbitrary and have a basis in reasoning, and it allows those affected to challenge the parts of a decision they see as unfair or improperly made.

There is a balance to be struck here. Requiring explanations helps prevent arbitrary decisions and corruption within government. But if we prioritize explanation too much, nothing will ever get done. At the other extreme, having one person flip a coin, or letting a biased king decide, would be efficient, but it would not comport with our notions of fairness.

Due Process

When the government needs to deprive someone of a protected interest (life, liberty, or property), it is allowed to do so, but only through a lawful process. The underlying intuition is that when removing one of these interests, the process should above all be fair.

Within the United States, we determine when an explanation is needed through the rules of due process. Explanation is a core part of due process: a judge is required to give a written or oral explanation of how they came to their decision. The same is true for administrative rulemaking agencies.

Due process also applies within administrative agencies (like the FDA and SEC). Administrative rulemaking requires that an agency respond to public comments on its proposed rules before they are promulgated. Administrative adjudicators must also provide reasons for their decisions in case those decisions are ever subject to judicial review.

Explanations are not always required. Decisions that are a direct product of the will of the people typically do not require them: democracy is the ultimate form of legitimacy, so explanations are not needed as justification, and they would in any case be difficult to craft. For example, a jury of one's peers is not required to explain its verdict, since the trial itself allows for a fair examination of the facts and testimony of the case.

Procedural Due Process

The 5th and 14th Amendments describe the process by which these rights can be curtailed. From these two mentions, four commonly understood parts of due process emerge.

The first is procedural due process, which ensures that the adjudication of laws is fair and valid. If you are part of a legal proceeding, you must be notified of when and where the proceeding will be held; you have the right to an impartial person to determine the facts of the case (police, jury, judge) and an impartial body to establish the law (appellate court); and you have the right to give testimony and evidence, as well as to receive any other information that would help you prepare.

One of the most significant developments in administrative law as it relates to procedural due process was the Supreme Court case Mathews v. Eldridge. In this case, a man named George Eldridge had his Social Security disability benefits terminated by the Social Security Administration. Eldridge did not have an opportunity to argue against the claim that he was no longer eligible for the benefits before they were cut off. He sued the Social Security Administration, saying it had not given him fair due process before terminating the benefits. In fact, the Social Security Administration had procedures in place under which Eldridge received an ample notification period and could have received an evidentiary hearing before a final determination was made, but it was not willing to continue disbursing his benefits until such a hearing took place.

Although the district court and the court of appeals both agreed that terminating benefits before the hearing was unconstitutional, the Supreme Court reversed. In a 6-to-2 decision, the Court held that the Social Security Administration did afford due process in terminating Eldridge's benefits without a hearing, because due process is not fixed or rigidly defined but rather "flexible", calling for "such procedural protections as the particular situation demands". Although this could have opened the floodgates for more cases demanding clarification of due process for each "particular situation", the Court asserted that "at some point the benefit of an additional safeguard to the individual affected by the administrative action and to society, in terms of increased assurance that the action is just, may be outweighed by the cost." This added a constraint on due process, limiting the number of safeguards the government is required to provide. The debate over the marginal benefit of a safeguard to justice versus its marginal cost will remain open, but in Mathews v. Eldridge the Court firmly ruled in favor of a degree of limitation on procedural due process, making it a logistically and financially tractable problem for the government to address.

The second doctrine is substantive due process. Even if the process is fair, who says the laws themselves are fair? Substantive due process limits the power of the government to create laws. There are certain rights the government may not take away, such as freedom of expression and freedom of association. Substantive rights protect core human interests, such as life, liberty, and the pursuit of happiness, as opposed to the procedures used to enforce those rights.

The third part of due process is that laws cannot be too vague. If a law is so vague that the average person cannot determine who is regulated, what is and is not allowed, or what the punishment may be, courts can find the law void for vagueness. In one case, Coates v. City of Cincinnati, the court held that a city ordinance banning three or more people from assembling on a sidewalk and "annoying" passersby was unconstitutionally vague.

Lastly, due process has a final, somewhat archaic function: applying the Bill of Rights to the states. Originally, the Bill of Rights applied only to the federal government, but it was later extended to limit the power of state and local governments.

Administrative agencies are ideally supposed to operate transparently. Due process is an important legal term that traces its history back to one of the earliest legal documents, the Magna Carta.

In the Magna Carta, it states that “no free man shall be seized or imprisoned, or stripped of his rights or possessions, or outlawed or exiled, or deprived of his standing in any other way, nor will we proceed with force against him, or send others to do so, except by the lawful judgment of his equals or by the law of the land”.

Within the United States, the concept of due process appears in the 5th Amendment, which provides that "no person shall be … deprived of life, liberty, or property, without due process of law", a restriction on the actions of the federal government.

It is also mentioned in the 14th Amendment, which provides that "no state shall … deprive any person of life, liberty, or property, without due process of law", a restriction on the actions of state and local governments. These clauses guarantee the opportunity to be heard in a proceeding before a court. They also safeguard citizens against the arbitrary denial of life, liberty, or property by the government beyond what the law provides.

As an example, consider arbitrary and capricious review. If an administrative agency makes a rule, a reviewing court cannot overrule it merely because the court would have decided differently. Instead, the standard of review is that the court may set the rule aside only if "an agency rule would be arbitrary and capricious if the agency has relied on factors which Congress has not intended it to consider, entirely failed to consider an important aspect of the problem, offered an explanation for its decision that runs counter to the evidence before the agency, or is so implausible that it could not be ascribed to a difference in view or the product of agency expertise".

The government has to be fairly detailed in its explanation of how it reached a conclusion. In Heckler v. Campbell, Carmen Campbell applied for disability benefits from Health and Human Services (an administrative agency) because she was no longer able to do her job as a hotel maid. She was denied benefits because the administrative judge ruled that she had other skills, such as speaking English, that made her able to do other jobs, and she was thus not eligible for disability benefits. To her, this was not a sufficient explanation of the jobs she was supposedly able to do. She appealed the decision, and the court of appeals agreed with her: it held that the medical-vocational guidelines "did not provide the specific evidence of alternative occupations that would be available to Campbell". It also held that because the agency relied on the guidelines, her due process rights were violated, as she was denied the chance to present evidence that she could not do the jobs the guidelines described.

What would happen if we started to have an AI algorithm make decisions? Would a court be able to accept it? How would such an algorithm fit within our current legal framework?

A landmark case, Wisconsin v. Loomis, examines whether AI tools that are a black box to humans can be used within the legal system. In the case, Eric Loomis pleaded guilty to two charges stemming from a drive-by shooting. The state of Wisconsin relied on an AI-generated risk report in sentencing him to six years in prison.

The case tested whether using AI violates a defendant's right to due process, since the validity of the AI cannot be challenged: the system is a black box, trained on a large number of prior cases, that produces a risk score from the new defendant's data.

While we might expect AI to be free from prejudice, AI shares many of the fallacies that humans have. And while a human might be able to articulate why they chose a particular sentence for Loomis, an AI algorithm cannot. Thus the question: is it fair for an algorithm to be used in sentencing, and is it a violation of Loomis's right to due process?

The court ruled that the AI algorithm is permissible as long as it is not the sole justification for the sentence. Judges using risk assessment tools must be able to point to other factors that support the result beyond the algorithm itself, and any presentence report that uses the COMPAS system must include written warnings and disclaimers noting the system's accuracy limitations and problems, so as to temper judges' reliance on it.

Conclusion

Through these case studies of artificial intelligence within administrative law and its use by federal agencies, we see an increasing trend toward artificial intelligence being used effectively in government. Although several barriers remain (including questions of privacy and due process), artificial intelligence that can improve workflows and better enforce laws will fundamentally transform how governments operate, both effectively and efficiently.

Although artificial intelligence has primarily been used in the private sector, often in enterprise settings, artificial intelligence born within the public sector, custom-tailored to the issues facing government agencies, is on the rise.

There are key considerations, however, that the government must address before we can see prolific growth in the use of artificial intelligence in government. The first, of course, is the set of questions around fair and due process that artificial intelligence might circumvent. Even as we get better at peering into the "black box" and understanding decision-making processes, we must understand the trade-offs artificial intelligence will force us to make. Questions of transparency and traceability become critical when enforcing the law, both to ensure equitable and explainable verdicts and to provide the oversight needed to eliminate as much algorithmic bias as possible from these systems. Although new implementations of AI in government will almost certainly save time, save money, and improve accuracy, we have yet to fully understand the impacts artificial intelligence will have on how different agencies operate. There will indeed be differences in agencies' capacity to incorporate AI, but we must ensure that when AI is used, it is used properly and does not cause more problems than it solves.
