The Ethical Implications in the Application of Artificial Intelligence in Law

Obaidul Hoque Chowdhury
Published in The Startup · 14 min read · May 26, 2019

Big Data has revolutionised the way we interact with machines. The ability of a machine to construct an image of a user from the information that user provides is called user profiling. Profiling is not limited in application to the user alone: the resulting information can also be used by an intelligent machine to make calculated decisions. In today’s world, Artificial Intelligence (AI) is at the forefront of change. From automating operations to providing personalised information for better decision-making, AI is playing an important role in the quest for technological innovation. The concept of AI rests on an intelligent agent that receives input variables from an environment and returns a response or set of actions.

In legal informatics, the application of AI has been stagnant due to the challenges posed by the diversity of natural language. Extracting information from a legal document, for example, was simplified by analysing data points that refer to certain word clusters irrespective of format, structure and design; it was simple because the data points were static. Judgements, legislation and the litigation process, by contrast, are dynamic data points and are challenging for literal text-processing machines. In recent times, however, a case has been made for self-regulating legal artificial intelligence.

Machine Learning (ML) algorithms are a family of learning algorithms that take historical data as input for pattern recognition. An ML system analyses past experiences, learns from them and understands the implications of an action in real-world scenarios. Another sub-category of AI is Deep Learning (DL), which combines multiple variables in a human problem to identify complex scenarios from simple concepts. An example is image recognition, where simple variables such as object identity, angle, contour and colour pixels are used to train a model to identify images.
AI has been adopted by the majority of industries, but fundamental challenges to the core ideology of AI adoption are still up for debate. The presumption that AI systems can be biased follows from their core principle of using historical data for decision-making. According to a survey conducted by the McKinsey Group in 2018, the challenges facing the adoption and effectiveness of AI come down to the formulation of an effective AI adoption strategy, a lack of talent for AI work, the limited applicability of insights from AI, and limitations in AI ethics and governance strategy.

Transnational governance strategies such as the GDPR, loosely based on a one-size-fits-all criterion, can be challenging to enforce at a time when companies are scrambling to incorporate AI into their business functions. Hence, the objective here is to critically evaluate GDPR-derived issues of ethics in the field of legal informatics. Legal informatics is defined as the theory and practice of computable law. In some definitions, it has been described as the quantitative analysis of legal behaviour, the prediction and formulation of law using mathematical logic, and legal information retrieval by automated means.

AI technology is currently being adopted at a rapid pace. The hype surrounding its abilities and its adoption across industries is leading to misrepresentation of what AI can actually do. In the Harvard Business Review, Andrew Ng discusses both the limited present capabilities of AI and its enormous future potential. This message is often misrepresented and has led companies to invest millions in AI without understanding their need for it, its capabilities, or the unintended consequences. Investigating ethical dilemmas across major industries is always going to be a stretch for a single article. Hence, I am going to take a shot at explaining how AI can be applied in legal informatics, along with the ethical dilemmas that can arise from its adoption.

AI in Legal Informatics

As discussed earlier, legal informatics is the application of information management and information science to the field of law. AI is applied to a variety of topics across the legal industry, at both corporate and government levels. Basic implementations of AI in law have taken the form of automation of repetitive tasks, legal research, document management, text classification of legal documents, risk assessment and so on. Below we discuss some of the perceived future capabilities of AI in legal informatics.

Quantitative Legal Predictions

The application of AI to make predictions in political and social science is not new. The sub-category of AI called decision trees has seen increasing application in legal prediction. One example from political science was the prediction of Supreme Court rulings using boosted decision trees. The simplest prediction produced by the algorithm, AdaBoosted Decision Trees (ADTs), was that in two-thirds of cases the petitioner wins the decision of the court. Given that court decisions can turn on the little information provided by prediction variables along with unseen factors, ADTs work well with high-dimensional data by limiting the effect of irrelevant variables when required. For example, ADTs achieve high prediction accuracy for US presidential elections, where the incumbent wins when the economy is thriving except in the case of an unpopular war. ADTs can render correlated variables that do not affect predictions in a given political circumstance ineffective.

Legal Reasoning using data and logic-based models

The course of legal argumentation is the major obstacle to AI legal reasoning. Data-driven techniques can be used to formulate rules that let intelligent machines process legislation. Litigation, however, requires AI to understand basic human discourse in a formal court procedure. The process becomes complex in general court proceedings, where language and argumentation are conducted in natural language. Analysing such stacks of data is always challenging given the variation in human styles of speech and reasoning. However, recent advances in the sub-category of AI called Natural Language Processing (NLP) have been explored in legal research. The challenge is studied by formulating legal rule-based clusters that can match a legal clause, or part of a sentence, with the natural language used in litigation.
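As a toy illustration of clause matching, the sketch below pairs a natural-language statement with its closest legal clause using TF-IDF vectors and cosine similarity. The clauses and the utterance are hypothetical, and a production system would need far richer NLP than this bag-of-words baseline.

```python
# Sketch: matching a natural-language statement to the closest legal clause
# with TF-IDF vectors and cosine similarity. Clauses are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

clauses = [
    "The tenant shall pay rent on the first day of each calendar month.",
    "The landlord shall maintain the premises in a habitable condition.",
    "Either party may terminate this agreement with thirty days notice.",
]
utterance = "My landlord never fixed the heating and the flat is unliveable."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(clauses + [utterance])

# Compare the utterance (last row) against each clause row.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = int(scores.argmax())
print("closest clause:", clauses[best])
```

The word overlap ("landlord") pulls the statement towards the habitability clause; note that "unliveable" and "habitable" share no surface tokens at all, which is exactly why bag-of-words is only a starting point for legal NLP.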

Legislative Modelling

Western democracies go through phases of legislative activity called political legislation cycles (PLCs). PLCs are often related to political cycles, periods in which political change is brought about through legislation for economic transformation. These legislative cycles can be measured by a multivariate time-series analysis of total legislation counts, taking into account the many political events that can influence a piece of legislation. Such a model is developed independently for each nation-state, given that public sentiment and societal influence are unique to each country.
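A minimal sketch of the idea: given a series of annual legislation counts, the dominant cycle length can be read off the autocorrelation function. The counts below are synthetic, with an assumed four-year election-driven surge; a real legislative-cycle model would be multivariate and include political-event covariates.

```python
# Sketch: reading a legislative cycle off the autocorrelation of annual
# legislation counts. Counts are synthetic with an assumed 4-year surge.
import numpy as np

rng = np.random.default_rng(1)
years = 40
# Baseline output plus a surge in every fourth (election) year, plus noise.
counts = 100 + 40 * (np.arange(years) % 4 == 0) + rng.normal(0, 5, years)

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# The lag with the strongest autocorrelation estimates the cycle length.
acf = {lag: autocorr(counts, lag) for lag in range(1, 9)}
cycle = max(acf, key=acf.get)
print("strongest cycle at lag:", cycle)
```

For this synthetic series the autocorrelation peaks at a lag of four years, recovering the cycle that was built into the data.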

Projections for The Litigation Process

The litigation process begins when the lawyer, acting on behalf of the Claimant, sends a letter to the Defendant explaining the case and the period for response. If the Defendant does not respond within the given time, the Claimant files Particulars of Claim with the appropriate court and the Defendant, along with witness statements or evidence, to issue proceedings. Once proceedings begin, the court will list the case for a case management conference, setting out its timetable. The hearing date is set based on that timetable, and proceedings begin thereafter. Client questions about the likelihood of winning, the prospects of settling, or timeline projections can be tackled through the predictive abilities of AI. Another example from the litigation process is divorce settlement, which typically takes around a year and costs $27,000 on average in the United States. One solution, provided by a company called Wevorce, offers a plan for managing optimal outcomes for couples, co-parenting planning and information about the spouses, along with legal counsel when required.

Legal Personal Assistant

Deep Learning NLP algorithms have become the go-to AI application for developing legal chatbots, typically built with classifier-based learning methods. In law, services such as Rocket Lawyer help individuals and businesses create their own legal documents without hiring a lawyer. An example of a chatbot that provides legal assistance for appealing parking fines in major cities such as London, Oklahoma and New York is DoNotPay, available on Android and iOS.

Why Ethical Governance Strategy

The point of creating an ethical governance framework is to give organisations guidelines on applying AI in legal informatics within the boundaries of ethical practice. A case is often made within the industry that organisations are constrained by the multitude of regulations that governments roll out in response to unethical conduct. It is, however, possible to align an organisation with an ethical ideology while staying relevant and profitable. In many circumstances, companies do not recognise the implications of their practices and continue operating under the perception of profitability. In the Cambridge Analytica scandal, the harvesting of 50 million Facebook user profiles for the Trump campaign's behavioural manipulation raised questions about how much trust can be placed in AI in the wrong hands. The problem is not just the ethics of use but also the development of legislation to protect basic human rights. Legislators and holders of political office often lack a basic understanding of intelligent machines, which obscures the actual issues in AI ethics. Hence, it is important to create an ethical governance strategy that provides a framework for developing technologies that improve human activities while eliminating human bias and the manipulation of human behaviour.

Ethical Challenges with AI

The ethics of decision-making is an important topic in a free and democratic society. This section will discuss challenges with AI in general and dive deep into the philosophical context of these challenges.

Algorithmic Prejudice

Intelligent machines are currently employed for a multitude of functions. They review our mortgage applications, build risk profiles of individuals and assist us with legal matters. Their prejudice can be traced back to their perceived decision-making abilities resting on skewed input data, false logic, or biases reproduced by AI programmers. For example, a 2016 study by researchers at the Human Rights Data Analysis Group found that PredPol, an algorithm designed to predict the likelihood of crime taking place in the US, unfairly targeted black and Hispanic neighbourhoods.
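One simple audit that catches this kind of skew is the disparate impact ratio: compare the rate at which a model flags each demographic group. The records below are a synthetic audit log, and the 80% figure is a common rule of thumb for flagging disparity, not a legal test in itself.

```python
# Sketch: auditing model outputs for disparate impact across two groups.
# Records are a synthetic audit log, illustrative only.
records = [
    ("A", True), ("A", True), ("A", True), ("A", False), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False), ("B", False),
]

def flag_rate(log, group):
    """Fraction of a group's records that the model flagged."""
    outcomes = [flagged for g, flagged in log if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = flag_rate(records, "A")  # 3/5
rate_b = flag_rate(records, "B")  # 1/5

# Ratio of the lower flag rate to the higher one; values under the 0.8
# rule of thumb are commonly treated as a warning sign worth reviewing.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"disparate impact ratio: {ratio:.2f}")
```

Here group A is flagged three times as often as group B, so the ratio falls well under 0.8 and the model would be referred for review.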

Threats to Humans

The ethical issue most discussed during the investigation into Cambridge Analytica was how AI was used to manipulate and modify human behaviour. Ethical challenges under this heading include the displacement of humans in industry by AI, invasion of privacy, social grouping and so on. Another form of threat is behavioural modification, as when an insurance company installs a monitoring system in an individual's vehicle to shape driving habits. These arguments trace back to the need for legislation that classifies unethical AI practices as illegal.

Inequality

The current economic system compensates hourly labour according to its contribution to the economy. However, profit-driven companies are increasingly making the case for drastically reducing the human workforce to improve profitability. Although the Standish Group's annual CHAOS Report states that only 36% of IT projects succeed, concerns about the distribution of revenue remain. The three largest companies in Detroit and the three largest in Silicon Valley generated roughly equal revenues, but the Silicon Valley companies did so with ten times fewer employees. The fundamental ethical dilemma, then, is how to manage the widening income gap created by machines that leave the workforce unemployed.

Responsibility

Facebook’s Cambridge Analytica scandal was particularly damaging to the company's public image because of Facebook's failure to take appropriate steps after discovering the misconduct: behavioural manipulation based on Facebook user profiles. This paints a picture of a company that can be branded too big to fail, and it creates an ethical conundrum for legislative bodies: who is accountable for the actions of AI deployed by an organisation?

Ethical Governance Strategy

Current applications of AI are limited by the requirement for big data, the complexity of the algorithms, and AI's lack of cognitive ability to make self-regulating decisions. The idea of self-regulating decision-making by AI has been misrepresented at every level. For example, in the US, defendants have to pay bail to avoid jail time while awaiting trial. This has come under scrutiny from rights groups, who argue that pre-trial release should be accessible to all regardless of income. In response, states have started implementing AI to assess the risk profiles of individuals. This has led some to speculate that such AI systems might pick up prejudices from the existing system. Scepticism about racial or communal bias has prompted calls for a review of the adoption of AI and its ability to add value in legal matters. The American Civil Liberties Union, the powerful American civil rights NGO, has in principle opposed the adoption of artificial systems in law as long as AI's fundamental flaws remain unaddressed.

The section above paints a grim picture of the applications of AI. However, it is hard to deny that AI improves productivity and efficiency. The core principle of deploying AI has historically been to assist human beings, not to replace them. Although national or region-based regulations such as the GDPR are crucial for enforcing AI principles, their implementation cannot encapsulate the cultural differences created by AI applications across a plethora of industries. Hence, this article will look to devise an ethical governance strategy for AI applications in law.

Use of Data

Responsible use of data rests on the company's long-term strategic decision-making. It must meet a legal basis for collection and incorporate the consent of the individual. Data must be processed fairly, in a way that is neither detrimental nor misleading to the individual.

Processing of Personal Data

Processing of personal data falls under the GDPR's provisions on the individual rights of consent, access, erasure, security and accountability. Organisations must obtain consent before collecting data and inform the individual of the reasoning behind the processing and the methods used. One way of complying with this requirement would be to publish a web article setting out the company's commitment to security and privacy and its techniques for processing individual data.

Human Factors

Ethical challenges in AI can also be attributed to the dynamics of human behaviour. The Amazon recruitment tool failed miserably because its developers expected it to select hires on the basis of historical data. The historical behaviour of individuals clustered into a sub-category does not necessarily dictate the actions of future respondents. Hence, robust procedures for identifying quantifiable variables of change, such as human behaviour, are paramount to having unbiased AI systems.

Accuracy

An intelligent machine with decision-making capability must have access to accurate data. For example, a wrongful diagnosis of a medical condition must be retained in the patient's medical records even after the correct diagnosis is recorded, because that information is relevant to explaining the treatment the patient was given. Reasonable steps should be taken to ensure data accuracy and to deliberate on any unintended consequences that may have been created by processing inaccurate data.

Security

Security of user data constitutes a large portion of the GDPR, and data breaches can lead to hefty fines. Data can be secured through encryption, masking, backups and the erasure of inoperable data. The GDPR also requires organisations to conduct a risk analysis and to evaluate their organisational policies and the physical and technical measures used to secure data.

Unintended Consequences

AI systems are becoming a core driving force in modern societies. The adoption of technologies such as smartphones, smart TVs and the Internet of Things has created a society dependent on intelligent systems, and that dependence can have unintended consequences. For example, the COMPAS system was implemented to support parole decisions, but it turned out to discriminate against African-American and Hispanic men. Although the system was intended for a noble cause, it carried a bias that was neither intended nor forecast. Robust oversight of the consequences of AI applications can help identify such unintended effects.

Validation of AI systems

Evaluation of an AI system is required to establish whether the responses it generates qualify as valid assessments. Hence, variables that indicate the validity of an analysis, assessment or decision need to be built into an AI system so the algorithm can signal whether its results can be certified as valid. The tension between causality and correlation dominates current research on AI validation, which revolves around correlation with a measured attribute that acts as an indicator. The correlations compiled by Tyler Vigen, which show, for instance, that deaths from falling out of wheelchairs correlate highly with rising potato chip prices, prompt the discussion about correlation versus causation. Companies looking to employ AI systems therefore need a robust framework for AI validation.
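The trap the Vigen examples illustrate is easy to reproduce: two series that merely trend over time will show a near-perfect Pearson correlation with no causal link, and de-trending exposes the artefact. The series below are entirely synthetic.

```python
# Sketch: two independently trending series show near-perfect correlation
# with no causal link; correlating year-over-year changes removes the
# artefact. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(15)

# Two unrelated quantities that both simply trend upward over time.
chip_price = 1.0 + 0.1 * t + rng.normal(0, 0.02, t.size)
wheelchair_deaths = 30 + 2.0 * t + rng.normal(0, 0.5, t.size)

r = np.corrcoef(chip_price, wheelchair_deaths)[0, 1]
print(f"Pearson r of raw series: {r:.3f}")  # close to 1, yet spurious

# De-trend by differencing: the year-over-year changes are just noise.
r_diff = np.corrcoef(np.diff(chip_price), np.diff(wheelchair_deaths))[0, 1]
print(f"Pearson r of year-over-year changes: {r_diff:.3f}")
```

A validation framework that checked only the raw correlation would certify a relationship that evaporates the moment the shared trend is removed, which is exactly why correlation alone cannot validate an AI system's indicators.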

Invisibility

AI services are ubiquitous in today's society, and the number of AI-powered robots is predicted to increase by 1.7 million in 2020. As the need for and use of AI technologies grows, they become increasingly invisible, creating ethical challenges around trust and transparency in the home, and around equality, fairness and the protection of civil liberties in the workplace. It falls on the organisations that possess such capabilities to ensure responsible processing of data.

Profiling for Automated Decision-Making

The GDPR has specific provisions on automated decision-making and profiling based on individual data. Automated decision-making is a process without human involvement, in which a decision is generated from factual data and the analysis of an individual's behaviour, personality, interests and so on. The challenge is that in the majority of cases the user is unaware of the logical reasoning behind the decision and how the conclusion was reached. Automated decision-making and profiling can comply with the GDPR by informing individuals about how their data is processed. Special care must be taken to protect the interests of vulnerable groups such as children and minorities.
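One lightweight way to address the "unaware of the logical reasoning" problem is to have the automated decision record a human-readable reason for every outcome, so the individual can be told how the conclusion was reached. The rules and thresholds below are hypothetical, purely for illustration.

```python
# Sketch: an automated decision that records a human-readable reason for
# each outcome. Rules and thresholds are hypothetical.
def decide_application(profile):
    """Return (decision, reasons) for a hypothetical credit check."""
    reasons = []
    if profile["income"] < 20000:
        reasons.append("annual income below the 20,000 threshold")
    if profile["missed_payments"] > 2:
        reasons.append("more than 2 missed payments on record")
    # Any adverse finding routes the case to a human, in the spirit of the
    # GDPR's safeguard of human intervention in automated decisions.
    decision = "refer to human reviewer" if reasons else "approve"
    return decision, reasons

decision, reasons = decide_application({"income": 18000, "missed_payments": 0})
print(decision, reasons)
```

Because every adverse outcome carries its triggering reasons, the organisation can disclose the logic of the decision to the individual instead of presenting an unexplained verdict.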

Creation of Rule-Based Framework for Algorithm Modelling

Algorithm modelling rests on adequately defining the requirement, choosing a measure of success and setting an evaluation criterion. The GDPR requires a record to be maintained of all technical and organisational measures involved in algorithmic modelling. Beyond that, setting up a committee to form a rule-based algorithmic modelling framework will align with the interests of the company and its stakeholders. The framework analyses each algorithm and its characteristics and flags any unintended consequences that could arise from its use. One can argue that the process is tedious and time-consuming, but it protects the organisation from unknowingly engaging in unethical conduct and incurring hefty fines under the GDPR.

The idea of AI has centred on improving the efficiency of human activities. This article does not deny the value AI can generate; rather, it discusses the steps we need to take to improve the process of AI deployment. Human bias has been the central issue in all social applications. Integrating ethical discussion into the application of AI in law is vital, given that this area of human activity is crucial to a well-functioning society. As Charles E. Clark summarised in 1942, the function of law is to serve society, not to enslave it. That statement has never been more relevant than in the developing relationship between human activities and Artificial Intelligence. It is important for organisations to implement an impact-based evaluation strategy covering privacy, security and data practices, so as to understand both the opportunities of AI and the risk of unforeseen consequences.

Regulating the ethical use of AI is a strenuous task, but a necessary one. In 1877, the B&O railroad company in West Virginia cut workers' wages for the third time that year. The result was a violent strike lasting 45 days, in which many workers lost their lives fighting to protect workers' rights and to establish sustainable frameworks. It took many years to recognise the damage the Industrial Revolution caused through the devaluation of basic human rights and social values. Today we face a similar challenge, one that could ultimately lead to a rejection of AI much like the backlash that accompanied the Industrial Revolution.
