Transparency of alGOVrithms in CEE and beyond

Sep 19, 2018

Author: Krzysztof Izdebski, ePaństwo Foundation

Algorithms aren’t going to go away, and I think we can all agree that they’re only going to become more prevalent and powerful. But unless academics, technologists and other stakeholders determine a concrete process to hold algorithms and the tech companies behind them accountable, we’re all at risk. — Megan Rose Dickey, senior reporter at TechCrunch

Introduction

This paper is based on preliminary findings on the transparency of automated decision processes implemented by selected public institutions in Central and Eastern Europe and serves as a starting point for further discussion engaging all stakeholders. While research on algorithms applied in social media and their impact on societies is present in the public debate, the analysis of algorithms used to support decision-making in the context of state-citizen relations is a relatively new phenomenon. We have not detected concrete steps taken by CEE governments to work on standards for incorporating algorithms into decision-making processes. This does not mean that public institutions do not use automated processes to regulate the legal and factual situation of CEE citizens. Therefore, our aim is to start a discussion to secure the rights and freedoms of citizens and to guarantee that this part of governments’ activities is also accountable and transparent.

We define alGOVrithms as automated selection or filtering processes, used by government authorities in decision-making, whose output directly influences citizens’ well-being. According to the definition contained in the online version of the Oxford dictionary, an algorithm is a process or a set of rules to be followed in calculations or other problem-solving operations, especially by a computer. David Harel[1] compares an algorithm to a cooking recipe. While the ingredients are the input data and the finished dish is the result, the many activities in between, such as selecting appropriate proportions at the right time or the applied methods of thermal processing, are precisely the algorithm. From everyday experience one can easily deduce that a single mistake at the stage of preparing a dish can ruin its taste and appearance.
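
To make the recipe analogy concrete, here is a purely illustrative toy sketch in Python. The function, its parameters and its thresholds are all invented for this example: the arguments are the ingredients, the return value is the dish, and the fixed rules in between are the algorithm.

```python
# Toy illustration of Harel's recipe analogy: the ingredients are the
# input data, the returned dish is the result, and the fixed rules in
# between are the algorithm. All values here are invented.
def bake_bread(flour_g: int, water_ml: int, yeast_g: int) -> str:
    if yeast_g <= 0:
        return "flatbread"  # one mistake in a step changes the whole result
    hydration = water_ml / flour_g
    if hydration < 0.6:
        return "dense loaf"
    return "well-risen loaf"

print(bake_bread(500, 350, 7))   # -> well-risen loaf
print(bake_bread(500, 350, 0))   # -> flatbread: the "failed dish"
```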

This scoping study is limited to those examples of “algorithms” that are created by central or local government institutions and implemented as software that influences and alters decision-making. One example is the Polish case in which the ePaństwo Foundation requested that the Minister of Justice provide access to the algorithm on the basis of which the Random Allocation of Judges System operates.

The study was conducted under the TransparenCEE Network and will serve as a pilot for a larger research project in the coming months, with a broader geographic scope and more detailed information collected.

General remarks

A lot of attention is paid to the transparency of algorithms in social media, also in the context of interference with elections. Yet algorithms that are part of governmental (but also legislative and judicial) software and strongly influence citizens’ lives are not fully researched, and the vast majority of society is not aware of them. This is a growing problem globally. Authorities have little experience in establishing the transparency of algorithms, which should be a basic right, like freedom of information.

There are already stories to share, like the French case concerning access to the algorithm that influences students’ choice of university after the Baccalauréat exam, the New York City Council’s bill to tackle algorithmic discrimination, or a recent case in Poland where the Ministry of Justice refused to reveal details of the algorithm used to allocate judges to specific court proceedings. However, the scale of this phenomenon globally is not fully recognized. As the Executive Director of AlgorithmWatch stated in El País, “It should not be a question of regulating artificial intelligence as a technology, but of controlling what people do with it to society and others.”

According to a report prepared by the Association of Councils of State and Supreme Administrative Jurisdictions of the European Union, judges point out some risks in the automated decision-making process. “This is because automatic decisions often fail to include an extensive evaluation of the circumstances of the case. By contrast with automatic decisions, civil servants can explain the background of a decision better and therefore delimit any dispute during the course of a review.” Automation can also “lead to the assessment underlying the decision being unclear and non-controllable due to a lack of transparency relating to the choices made, and the data and assumptions used: the ‘black box’.” This opinion was also shared by US Attorney General Eric Holder, who said: “Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice.” They also may “exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.” Although the Attorney General referred to problems with using algorithms in the justice system, which, as we detected, has not yet been introduced in CEE countries, the risks of using automated decisions are similar in every domain. This was also identified by the Omidyar Network, which noted that “the use of automated decisions is far outpacing the evolution of frameworks to understand and govern them”.

As Kate Crawford, the principal researcher at Microsoft Research, put it in an interview with The New York Times, “if you are given a score [by an algorithm] that jeopardizes your ability to get a job, housing or education, you should have the right to see that data, know how it was generated, and be able to correct errors and contest the decision.”

While algorithms are considered a key element of technology and innovation these days, not much has been done to understand their present and potential use by public administration in the Central and Eastern European region. This study aims to encourage further research and cooperation in this field. It is worth noting that the V4 governments have recently published an official letter to the European Commission stating that “the artificial intelligence can strongly support the reform of public administration in decision-making, e.g. in preparing regulatory impact assessment, so its use should be further analyzed and promoted.” The influence of new technologies (AI, blockchain, algorithms) on societies and governments is still largely unexplored. We see the need to analyze the environment at the earliest stage in order to identify ways to tackle potential risks and seize opportunities, especially as we can already share some insights from countries where this debate is more advanced.

Standards on algorithmic transparency

As in most spheres in which the state has a strong impact on individual rights and freedoms, a comprehensive system of checks and balances should be put in place. We are aware of the fact that the transparency of algorithms often depends on the contract signed between the relevant authorities and the companies delivering the software, but “while the technologies of major corporations like Facebook or Google are protected under a host of proprietary and trade-secret laws, states do have authority to push for public agencies to open up more about how their algorithms work.” On the other hand, the transparency and accountability of algorithms are also crucial for those who are in power and who are responsible for the outcomes of the automated decision-making process.

As discovered by the Council of Europe, “One aspect here is that the developer of algorithmic tools may not know their precise future use and implementation. The person(s) implementing the algorithmic tools for applications may, in turn, not fully understand how the algorithmic tools operate.” Rebecca MacKinnon, representing New America, has observed that “algorithms driven by machine learning quickly become opaque even to their creators, who no longer understand the logic being followed”. Another problem that has arisen around the transparency of algorithms is that “Governments cannot disclose more information than they have. (…) governments simply did not have many records concerning the creation and implementation of algorithms, either because those records were never generated or because they were generated by contractors and never provided to the governmental clients. These include records about model design choices, data selection, factor weighting, and validation designs”. This problem is also connected with trade secrets and proprietary rights held by contractors. However, “the balance of competing interests must be resolved in favor of algorithmic transparency to the greatest extent possible. This is not to say that government need not provide some measure of just compensation if government discloses secret or confidential proprietary information. But the bottom line is that algorithmic transparency is essential to the continuance of our democratic system of governance.”

Therefore, governments need to set up guidelines to hold algorithms accountable. The guidelines should be based on auditability, fairness, accuracy, and explainability.[2]

Although the GDPR requires data controllers to provide data subjects with information about “the existence of automated decision-making, including profiling (…) [and,] at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”, it is still a matter for the relevant jurisprudence to decide on the practical implementation of this clause in terms of the scope of the information provided.

We can also see the first legislation in place, starting with France, where the Digital Republic Bill requires disclosure of the source code of government-developed algorithms, imposing an obligation to publish online source code, databases, and any other data of public interest. The New York City Council has also adopted a Local Law in relation to automated decision systems used by agencies, under which a special Task Force is set up, responsible, among other things, for developing and implementing a procedure through which a person affected by a decision concerning a rule, policy or action implemented by the city, where such decision was made by or with the assistance of an agency automated decision system, may request and receive an explanation of that decision. The Task Force will also develop and implement a process for making information publicly available that, for each agency automated decision system, will allow the public to meaningfully assess how the system functions and is used by the city, including making technical information about the system publicly available.

We can expect that similar legislation will also be introduced in the United Kingdom, where a parliamentary committee called on the ‘Centre for Data Ethics & Innovation’ being set up by the Government to examine algorithmic biases and transparency tools, to determine the scope for individuals to challenge the results of all significant algorithmic decisions which affect them (such as mortgages and loans) and, where appropriate, to seek redress for the impacts of such decisions.

Selected cases from CEE

The study was inspired by the case of the ePaństwo Foundation, which wanted to receive information on the operation of the Random Allocation of Judges System (RAJS) introduced by the Minister of Justice. The Foundation filed a freedom of information request, but the Minister stated that the requested information did not constitute public information, claiming that the algorithm is part of a source code which, according to Polish jurisprudence, cannot be accessed or reused.

In our opinion, however, this algorithm is much more than information on how specific software operates. It decides on the situation of individual citizens engaged in court cases. As written in a testimony presented by the Polish Judges Association “Iustitia”, “in order for the computer system to randomly assign cases to be able to serve the purpose of transparency and uniformity of assignment of cases, the assumptions to the system should be clearly defined and the method of their implementation must be written up and verifiable. Meanwhile, neither the assumptions nor the principle of operation are publicly known (…).” Incidentally, it should be pointed out that the System was financed with public funds and the copyrights are vested in the Polish State Treasury. It is also worth noting that the system was presented in October 2017, while the regulation describing its performance was passed only on December 28, 2017. How the system actually works remains a matter of guesswork. The Foundation has filed a complaint with the Regional Administrative Court in Warsaw to reverse the Minister’s refusal and is awaiting the date of the court hearing.
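
Since neither the assumptions nor the principle of operation of RAJS are public, any description of its inner workings is necessarily speculative. Below is a minimal sketch of what a workload-weighted random allocation could look like; every field name and weighting rule is our own assumption, not the Ministry’s design. The point is that such logic is short enough to be published and audited.

```python
import random

# Hypothetical sketch of a workload-weighted random case allocation.
# The real RAJS algorithm has not been disclosed; every field and rule
# here is an assumption made for illustration only.
def allocate_case(judges: list, category: str) -> dict:
    # Only judges handling the relevant case category are eligible.
    eligible = [j for j in judges if category in j["categories"]]
    if not eligible:
        raise ValueError(f"no eligible judge for category {category!r}")
    # The draw is weighted by remaining capacity, so overloaded or
    # part-time judges are selected less often.
    weights = [max(j["capacity"] - j["pending_cases"], 0) for j in eligible]
    if sum(weights) == 0:
        weights = [1] * len(eligible)  # everyone is full: uniform fallback
    return random.choices(eligible, weights=weights, k=1)[0]

judges = [
    {"name": "Judge A", "categories": {"civil"}, "capacity": 10, "pending_cases": 3},
    {"name": "Judge B", "categories": {"civil", "criminal"}, "capacity": 10, "pending_cases": 9},
]
print(allocate_case(judges, "civil")["name"])
```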

This was, however, not the first case in Poland where civil society organizations actively worked towards the transparency of alGOVrithms. In May 2014 the Polish Ministry of Labor and Social Policy implemented a system based on profiling the unemployed to decide how to distribute labor market programs among specific categories of citizens registered as unemployed (e.g. job placement, vocational training, apprenticeship, activation allowance). The system works on data collected during a computer-based interview with the unemployed person, combined with 24 different dimensions implemented in the electronic database, each of which is assigned a score. The final score is determined by an algorithm. The Minister also denied access to the code. The Panoptykon Foundation, which performed an in-depth study of the algorithm, also claimed that citizens are restricted from receiving access to information on how the system works, as its operation is treated as confidential. The authors of the study wrote that “lack of transparency in the process of profiling in this case is directly related to the choice of the computer system as the main decision-making tool and the decision to keep the underlying algorithm secret (even from the frontline staff who are responsible for carrying out the interview with the unemployed).”
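
From the public descriptions alone (a computer-based interview, 24 scored dimensions, a final score computed by an algorithm), the general shape of such a profiler can be sketched as below. The dimensions, scores and thresholds here are entirely invented; the real values were kept secret, which is exactly the problem the Panoptykon study points to.

```python
# Hypothetical sketch of the profiling mechanism described above: each
# interview answer maps to a per-dimension score, and the summed total
# selects an assistance category. All numbers are invented; the real
# weights and thresholds were never disclosed.
ANSWER_SCORES = {
    "age_band":  {"18-29": 2, "30-49": 1, "50+": 4},
    "education": {"higher": 0, "secondary": 2, "primary": 5},
    # ... the real system reportedly used 24 such dimensions
}

def profile_unemployed(answers: dict):
    total = sum(ANSWER_SCORES[dim][ans] for dim, ans in answers.items())
    if total <= 3:
        return total, "category I"    # e.g. job placement
    if total <= 7:
        return total, "category II"   # e.g. vocational training
    return total, "category III"      # e.g. activation allowance

print(profile_unemployed({"age_band": "50+", "education": "primary"}))
# -> (9, 'category III')
```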

Other examples cover decisions regarding road transport, mainly based on the data included in the Central Register of Vehicles and Drivers. On the basis of the information included in this register, the competent authorities issue, among others, decisions regarding vehicle registration (e.g. a refusal to register a vehicle recorded as stolen) and decisions imposing fines (e.g. against a person appearing in the register as the owner of a car that drove through a monitored section of the road, under a control device, without payment of the required fee; in this case the basis for the penalty is likewise the relevant registration records together with photographic documentation). Similar automated decisions are implemented in Lithuania, Czechia, Romania and Slovakia. This type of algorithm usage was also discovered in Hungary, where the Hungarian General Administrative Procedure Act (Act CL of 2016) regulates the so-called automated decision-making procedure for cases where all the evidence and relevant data are available to the authorities and the decision-making involves neither discretion nor opposing parties. Estonia has also introduced automated decision-making in the tax system (automated control of income declarations by the data processing system) and in allocating children to specific schools in the city of Tartu.
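
These register-based decisions are the simplest kind of alGOVrithm: pure rules over existing records, with no discretion involved, much like the procedure the Hungarian act describes. A minimal sketch follows; the field names and rules are invented for illustration.

```python
from dataclasses import dataclass

# Minimal sketch of a fully rule-based register decision of the kind
# described above. Field names and rules are invented; the point is
# that the outcome follows mechanically from the register data.
@dataclass
class VehicleRecord:
    plate: str
    reported_stolen: bool
    toll_paid: bool

def registration_decision(rec: VehicleRecord) -> str:
    if rec.reported_stolen:
        return "refuse registration: vehicle recorded as stolen"
    return "register vehicle"

def toll_decision(rec: VehicleRecord) -> str:
    if not rec.toll_paid:
        return "impose fine on the registered owner"
    return "no action"

rec = VehicleRecord(plate="WX 12345", reported_stolen=False, toll_paid=False)
print(registration_decision(rec))  # -> register vehicle
print(toll_decision(rec))          # -> impose fine on the registered owner
```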

Summary

The general public and each individual should have the right to access information on automated decision-making processes. Public authorities should secure this right from the first steps of creating and implementing algorithms: by preparing and presenting regulatory impact assessments, introducing transparency clauses in contracts with companies delivering the software, issuing guidelines explaining the operation of algorithms, and elaborating a review and remedy system. States should also grant access and perform the above tasks for automated decision-making processes that are already in operation.

Appendix

Based on the knowledge presented above, we see the need to perform a further in-depth study on the use of alGOVrithms, covering the following questions elaborated in cooperation with partners from the TransparenCEE Network. We strongly encourage using them to identify the alGOVrithms landscape in specific countries.

  1. Do authorities implement algorithms in software?

Name identified examples and describe how they work or might work by answering the questions below. (Which state sectors are using algorithms?)

This question serves as a place to describe how the algorithm works, whom it targets, and what the results of its implementation are.

  2. How is the alGOVrithm regulated?

This question will help to gather information on whether algorithms are regulated by law (and to describe that law if so) and, if not, whether any other documents (e.g. internal regulations) are in place.

  3. Who has created the algorithm?

Here we are asking whether the algorithm, and the software which uses it, were created by a public institution or outsourced to an external company and, if the latter, on what legal grounds (e.g. a public tender).

  4. Is the algorithm open to the public and who has access to it?

Is the software using the algorithm open source? Is the algorithm’s code open source? Is it possible to access the algorithm through a freedom of information request, or is access restricted to a selected group?

  5. Who controls the algorithm’s accuracy/fairness?

Is there a system for raising doubts about the algorithm’s accuracy or fairness? Is there a system of remedies? Can individuals or organisations appeal against the algorithm’s prediction and, if yes, on what grounds?

[1] David Harel (1987), Algorithmics: The Spirit of Computing.

[2] See: http://www.fatml.org/resources/principles-for-accountable-algorithms#social-impact

Originally published at transparencee.org.
