Do you know what your algorithms are up to?

Krzysztof Izdebski
Published in Fundacja ePaństwo · Jul 16, 2019

If there is a general lack of trust towards governments today, are we then — as citizens — also right to distrust the tools provided to us by the public administration?

An answer to this difficult question is becoming urgent in the field of automated decision-making and AI.

Automated decision-making refers to systems that use automated reasoning to aid or replace a decision-making process that would otherwise be performed by humans. It has recently become a hot topic at conferences and policy meetings.

More and more governments are joining these debates by elaborating strategies or, in some rare cases such as Canada and France, by preparing concrete regulations.

Still, many governments seem to treat the topic as groundwork for introducing Artificial Intelligence into administrative or judicial procedures, rather than concentrating on the automated systems already in use.

Kindergarten algorithms

In the Polish city of Wrocław, the local authorities have implemented a software tool to manage admissions to local kindergartens. According to frustrated parents, however, the process has been far from perfect.

One of the parents interviewed by the local news station said that “in order to have access to up-to-date recruitment information, we need to get organised on social networks, compare results, collect errors, inform the office of any shortcomings, interrogate officers who are completely unprepared to provide information and who send us back to other equally uninformed officials. Then we write to appropriate institutions without receiving any answers.”

This example illustrates how a lack of policies around automated decision-making can undermine trust in both public authorities and the services they provide. And the problem is not confined to one city or one country.

It is a common phenomenon, as revealed in the recent report “alGOVrithms. The State of Play. Report on Algorithms Usage in Government-Citizens Relations”, which I co-authored and which was published by the Poland-based ePaństwo Foundation together with partners from Georgia, Hungary, Serbia, Czechia and Slovakia. The research into the current automated decision-making environment in Central and Eastern Europe provides evidence that citizens already have quite a few reasons not to trust the authorities.

No legal and ethical framework

In the report we find evidence of automated decision-making in a large number of public spheres, including speed enforcement, the allocation of judges and other public officials, the selection of entities for controls and inspections, the distribution of social benefits, fraud detection, and even the preselection of contractors in public procurement.

Despite the ongoing debate on the dangers that algorithms pose to human rights, we have not identified any overall government policy on the implementation of algorithms in any of the countries participating in the research.

We detected no ethical or legal frameworks that comprehensively describe personal responsibility for the tools’ application, the safety of their implementation, or the rights and obligations of states and citizens in this regard.

Knowledge of how these tools function remains classified. This is a problem not only for citizens, including the parents desperately looking for information about the kindergarten admissions process, but also for public officials, who have no knowledge of how the tools they use to perform public tasks actually operate.

Retaking public control

Algorithms used in software created for automated decision-making are not subject to transparency requirements, and public access to the algorithms, or to the source code that implements them, is not possible.

For instance, not only citizens but also judges are deprived of access to information about the Random Allocation of Judges System used in Poland, Georgia and Serbia.
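
To make the stakes concrete, here is a purely hypothetical sketch, in Python, of what a random case-allocation routine might look like. It is not the actual system used in any of these countries; the judge names, the caseload cap and the fallback rule are all invented for illustration.

```python
import random

# Purely illustrative: a toy case-allocation routine, not the actual
# system used in Poland, Georgia or Serbia. The judge names, caseload
# cap and fallback rule are all invented for this example.

JUDGES = {"Judge A": 12, "Judge B": 7, "Judge C": 20}  # name -> open cases
CASELOAD_CAP = 15  # hypothetical eligibility threshold


def allocate_case(judges: dict, cap: int, seed=None) -> str:
    """Pick a judge at random from those whose caseload is under the cap."""
    rng = random.Random(seed)
    eligible = [name for name, open_cases in judges.items() if open_cases < cap]
    if not eligible:
        # Hidden design choice: fall back to the least-loaded judge.
        # A different fallback would send the case elsewhere -- exactly
        # the kind of rule that needs to be public and auditable.
        return min(judges, key=judges.get)
    return rng.choice(eligible)


print(allocate_case(JUDGES, CASELOAD_CAP, seed=42))
```

Even a routine this small embeds policy choices, such as who counts as eligible and what happens when no one does. Without access to the source code, neither citizens nor judges can scrutinise those choices.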

Frequently, only the companies that create these tools for public institutions have full knowledge of how the algorithms function, and the tools do not undergo any external audit to determine their fairness and accuracy.

Additionally, none of the researched countries has established a coordinating body responsible for monitoring the implementation of automated decision-making, including the creation of the tools and their performance.

The report concludes with policy recommendations that governments in the region and beyond should implement without delay.

As more and more public-sector tools make use of AI, the problems described in the report will only get worse. We therefore recommend that governments:

  • Introduce new policies regulating automated decision-making, covering the transparency of both the implementation process and the functioning of the tools;
  • Apply Algorithmic Impact Assessments (AIAs) to these systems, analogous to the Regulatory Impact Assessments used in law-making, obliging the institution responsible for a tool’s implementation to demonstrate the necessity of its use, its impact on citizens’ rights, the risks involved, and the methods of evaluation;
  • Create mechanisms for the independent control of algorithms;
  • Institute legal guarantees to counteract discrimination and provide effective means of legal protection.

Finally, a multidisciplinary approach to the process of creating algorithms is a must.

Human rights organisations play an important role in providing expertise on social inclusion and equality, prevention of discrimination and transparency of governments.

There is also a growing civic tech community that can assist governments in creating tools genuinely aimed at supporting citizens and the public interest, rather than at the convenience of the authorities and a misguided notion of the “effectiveness” of public institutions.

This article was originally published at apolitical.co on July 9, 2019.

Krzysztof Izdebski is a lawyer and civic activist, and the Policy Director and a Board Member of the ePaństwo Foundation (http://epf.org.pl/en/). Follow him on Twitter at @K_Izdebski.