AYS Daily Digest 27/12/2019: iBorderCTRL a great success…for whom?

Are You Syrious?
Dec 28, 2019

New summary brings more attention to a quiet pilot project to implement AI at EU borders // Call for shelter in cold Athens // Reminder that people will be kicked out of reception centres on NYE in Italy //

FEATURE: The Downsides of the Uploading of Humanity

Novara Media published an important article by Robin Warrin, providing an update on a troubling project being tested at several points along Europe’s borders. The project, called “iBorderCtrl”, is oriented around integrating new technologies into border checkpoints. During the pilot, which ran for six months until the summer of 2019, volunteers were asked whether they wished to participate in the proposed new method of border control, which incorporates traditionally used border security methods as well as new ones. The checkpoints where the pilot project was held were at the Latvian-Russian border, the Hungarian-Serbian border, and the Greek-North Macedonian border. Since this is a project funded by the EU’s Horizon research programme and is designed to strengthen the EU’s borders, it is presumed that it was deployed for travellers entering the EU from non-EU states (i.e. from Russia to Latvia, from Serbia to Hungary, and from North Macedonia to Greece); however, this is unclear.

What is iBorderCtrl?

According to the EU’s summary of the project kickoff:

“More than 700 million people enter the EU every year — a number that is rapidly rising. The huge volume of travellers and vehicles is piling pressure on external borders, making it increasingly difficult for border staff to uphold strict security protocols — checking the travel documents and biometrics of every passenger — whilst keeping disruption to a minimum.

To help, the EU-funded project IBORDERCTRL is developing an ‘intelligent control system’ facilitating — making faster — border procedures for bona fide and law-abiding travellers. In this sense, the project is aiming to deliver more efficient and secure land border crossings to facilitate the work of border guards in spotting illegal immigrants, and so contribute to the prevention of crime and terrorism.

‘We’re employing existing and proven technologies — as well as novel ones — to empower border agents to increase the accuracy and efficiency of border checks,’ says project coordinator George Boultadakis of European Dynamics in Luxembourg. ‘IBORDERCTRL’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.’”

According to official documents that were released via a freedom of information request from the Hermes Center for Transparency and Digital Human Rights, the iBorder Control System incorporates four sub-systems:

Automated real time deception detection system (ADDS)
Biometrics tools (BIO)
Travel document authenticity analytics tool (DAAT)
Face matching tool (FMT)

The second and third systems are essentially fingerprint/palm-print and passport/visa verification systems, which are often already integrated into the border-checking process. The face-matching tool is likewise already incorporated into some parts of the process. The newest addition to the battery, and the one raising new questions, is the “Automated real time deception detection system”, or ADDS.

The ADDS system relies upon software developed by Silent Talker Intelligent Systems, a UK company that focuses on AI-based psychological profiling. Their very ugly website includes this blurb:

“Designed for use in natural conversation, Silent Talker combines image processing and artificial intelligence to classify multiple visible signals of the head and face that accompany verbal communication. From these, it produces an accurate and comprehensive time-profile of a subject’s psychological state.”

This is a lot of sciency-sounding woo, but essentially, the AI is a trained algorithm that has been fed a lot of data based on the concept of “affect recognition”, which purports to identify the emotions an individual is feeling based on their facial expressions (specifically in the context of what they are saying).

How it works with iBorderCtrl is that travellers conduct a “pre-travel interview” with an avatar — a digital border guard who asks them questions. They are recorded, and the software analyses their facial expressions to “assess if they are lying”, producing a score out of 100. If they are found to be lying, they are taken for further questioning by an in-person border guard.
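To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the kind of pipeline the reporting describes: per-question outputs from a deception classifier are aggregated into a 0-100 score and a risk band that a human guard then acts on. The actual ADDS logic is proprietary and undisclosed, so every function name, threshold and cut-off below is an assumption made purely for illustration.

```python
# A minimal, hypothetical sketch of the kind of pipeline the reporting
# describes: an avatar interview yields per-question "truthfulness" estimates,
# which are aggregated into a 0-100 score and a risk band that a human border
# guard then acts on. The real ADDS logic is proprietary and undisclosed, so
# every name, threshold and cut-off here is an assumption for illustration only.

from dataclasses import dataclass
from typing import List


@dataclass
class InterviewResult:
    score: int            # 0-100; higher = judged more truthful (assumption)
    flagged_answers: int   # answers the classifier judged to be false
    risk_band: str         # "low", "medium" or "high" (assumption)


def assess_interview(per_question_truth_scores: List[float]) -> InterviewResult:
    """Aggregate per-question outputs of a (hypothetical) deception classifier.

    Each value is in [0, 1]; higher means the answer was judged more likely
    truthful. An answer below 0.5 is counted as "false".
    """
    flagged = sum(1 for s in per_question_truth_scores if s < 0.5)
    score = round(100 * sum(per_question_truth_scores) / len(per_question_truth_scores))

    # Illustrative cut-offs only; the actual referral criteria are not public.
    if flagged >= 6:
        band = "high"
    elif flagged >= 3:
        band = "medium"
    else:
        band = "low"
    return InterviewResult(score=score, flagged_answers=flagged, risk_band=band)


# Example loosely echoing the Intercept report: 16 questions, 4 flagged as false.
print(assess_interview([0.8] * 12 + [0.3] * 4))
# -> InterviewResult(score=68, flagged_answers=4, risk_band='medium')
```

The point of the sketch is not the numbers but the architecture: a hidden scoring rule, with thresholds the traveller never sees, deciding whether a human treats them with suspicion.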

The challenge of affect recognition is that, well, human emotions are complicated and there is rarely a one-to-one or even two-to-one mapping.

Generally speaking, one could train an AI to recognize that something is out of the ordinary; however, this isn’t watertight by any means. “Expected” facial expressions vary based on a wide variety of factors, including gender, cultural background, age, and emotional state.

The Novara article summarizes many of these conclusions, and The Intercept put the system to the test itself. In their investigative report “We Tested Europe’s New Lie Detector for Travelers — and Immediately Triggered a False Positive”, Ryan Gallagher and Ludovica Jona describe how the iBorderCtrl process works. Their reporter answered the questions truthfully, but received a surprise:

“A person judged to have tried to deceive the system is categorized as ‘high risk’ or ‘medium risk,’ dependent on the number of questions they are found to have falsely answered. Our reporter — the first journalist to test the system before crossing the Serbian-Hungarian border earlier this year — provided honest responses to all questions but was deemed to be a liar by the machine, with four false answers out of 16 and a score of 48. The Hungarian policeman who assessed our reporter’s lie detector results said the system suggested that she should be subject to further checks, though these were not carried out.

Travelers who are deemed dangerous can be denied entry, though in most cases they would never know if the avatar test had contributed to such a decision. The results of the test are not usually disclosed to the traveler; The Intercept obtained a copy of our reporter’s test only after filing a data access request under European privacy laws.”

The human element here (in-person border guards who receive the “score” from the computer) is another issue that was raised by Amnesty International, highlighting the risks of poor training and personal bias. “The computer says you are a liar, so a liar you are” could easily lead to a combination of bias and bad technology denying someone’s freedom of movement.

It should be noted that, presumably, the AI system is meant to replace the guard at the counter who would otherwise ask the questions and assess one’s truthfulness, referring a traveller to additional questioning should that guard judge them to be lying. Indeed, iBorderCtrl purports that it might be fairer than a human border guard:

“…[It] is important to accumulate knowledge on the advantages and disadvantages of the technology, and have an open debate on the issues as the basis for an informed democratic decision of society at large. This will help to ensure that such a system will only be used at the border if it provides fairer and better results than the current system, solely relying on human beings. For instance, an AI-based system with high accuracy might prove to decrease the risk of discrimination and other fundamental rights issues if designed and implemented properly.”

One would assume that the AI is a continuously learning program that incorporates the results of each interview into the updated algorithm; however, the dataset used to train the original algorithm, as well as the biases of those designing the software, would still play a significant role.

In a Gizmodo article published last year around the project’s initiation, some troubling shortcomings came to light:

“[The] automated lie-detection system was modeled after another system created by some individuals from iBorderCtrl’s team, but it was only tested on 30 people. In this test, half of the people told the truth while the other half lied to the virtual agent. It had about a 76 percent accuracy rate, and that doesn’t take into consideration the variances in being told to lie versus earnestly lying. ‘If you ask people to lie, they will do it differently and show very different behavioral cues than if they truly lie, knowing that they may go to jail or face serious consequences if caught,’ Maja Pantic, a Professor of Affective and Behavioral Computing at Imperial College London, told New Scientist. ‘This is a known problem in psychology.’”
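To put that 76 percent figure in perspective, here is a quick back-of-the-envelope calculation (ours, not from either article) of how little a result from 30 test subjects actually pins down. It assumes roughly 23 of the 30 were classified correctly, since that is what a 76 percent rate implies, and uses the standard normal approximation for a binomial proportion.

```python
# A back-of-the-envelope check, not from either article, on how little a
# "76 percent accuracy on 30 people" figure actually pins down. It assumes
# roughly 23 of 30 subjects were classified correctly (0.76 * 30 ≈ 23) and
# uses the standard normal approximation for a binomial proportion.

import math

correct, n = 23, 30
p_hat = correct / n
std_err = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * std_err, p_hat + 1.96 * std_err

print(f"observed accuracy: {p_hat:.2f}")
print(f"95% confidence interval: {low:.2f} to {high:.2f}")
# -> roughly 0.62 to 0.92: anywhere from barely better than chance to quite
#    good, before even accounting for the instructed-lying problem Pantic raises.
```

In other words, the published figure is statistically compatible with anything from barely better than chance to fairly accurate, and that is before the difference between instructed lying and real lying is even considered.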

Furthermore, the Novara article highlights that the number of facial expressions the AI is trained to recognize is 40*, and that the initial “training data” (the data used to train the algorithm) was based on just 32 individuals — 10 of “Middle Eastern/Asian” descent and 22 of “White European” descent.

*Another article states that it is 38.

In a 2018 roundup of AI developments, and in many other research articles, it has been found that facial recognition is quite poor at recognizing and processing non-White faces, often due to a lack of data. This often yields higher false-positive rates and flawed assessments. We see this lack of data reflected in the pilot program above, as well as in the comparative overrepresentation of “White Europeans”, which could indicate a bias against non-White subjects in an already small sample of training data. Couple this with the possible bias and lack of training of the border guards implementing the technology: a higher likelihood of the computer marking a non-White person as high risk, with that assessment then helping to inform the guard assigned to do the follow-up interview (and possibly compounding their own internal bias) — it’s a foreseeable outcome.
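To illustrate the mechanism rather than assert anything about iBorderCtrl’s actual model, here is a small toy simulation: a single decision threshold is fitted on training data dominated by one group, the groups’ “baseline” facial signals are assumed to differ, and the underrepresented group ends up with a far higher false-positive rate. All distributions, numbers and group labels are invented for illustration; only the 22/10 split echoes the figures above.

```python
# A toy simulation of the mechanism described above: one decision threshold is
# fitted on training data dominated by a single group, the groups' "baseline"
# facial signals are assumed to differ, and the underrepresented group ends up
# with a far higher false-positive rate. All distributions, numbers and group
# labels are invented; only the 22/10 split echoes the figures reported above.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-dimensional "deception signal": truthful answers cluster
# around a group-specific baseline, and lying shifts the signal upward by 2.
BASELINE = {"majority": 0.0, "minority": 1.0}   # assumed differing baselines
TRAIN_SIZES = {"majority": 22, "minority": 10}  # mirrors the reported split


def sample(group: str, lying: bool, n: int) -> np.ndarray:
    return rng.normal(BASELINE[group] + (2.0 if lying else 0.0), 1.0, n)


# Fit one global threshold: the midpoint between pooled truthful and lying means.
truthful_train = np.concatenate([sample(g, False, n) for g, n in TRAIN_SIZES.items()])
lying_train = np.concatenate([sample(g, True, n) for g, n in TRAIN_SIZES.items()])
threshold = (truthful_train.mean() + lying_train.mean()) / 2

# False-positive rate: share of *truthful* people flagged as liars, per group.
for group in BASELINE:
    truthful_test = sample(group, False, 100_000)
    fpr = (truthful_test > threshold).mean()
    print(f"{group:8s} false-positive rate: {fpr:.0%}")
# Typical output: the minority group is flagged several times as often, purely
# because the single threshold was tuned to the majority group's baseline.
```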

However, it should be noted that the argument about a lack of representation in training data could be used as a justification for further encroachment on privacy: “We need more data to train the algorithm so that it is more ethical!” The passive capture and use of data remains a rightfully contentious issue, one which iBorderCtrl attempts, feebly, to manage in their FAQ:

“In order to protect the right to privacy, participants were asked to provide their informed consent prior to participating in the test pilots. Before doing so, they were informed about both the data processing and their rights as data subjects. Participants could also withdraw from the test pilot at any time and ask for their data to be deleted. Moreover, data collected in the test pilots was not shared with any 3rd parties (i.e. law enforcement agencies) and was deleted or anonymised after the project testing phase concluded in August 2019.

With regard to a possible discrimination of research participants through the system, it has to be noted that a validation also serves the purpose of detecting malfunctions (including inadvertent bias) of the system. However, as the sole purpose of the pilots was to detect such issues, and the data was not used for any other purpose, no real-life disadvantages arose for participants.”

However, the details regarding iBorderCtrl remain notoriously opaque, being deliberately obscured. In the Novara article, Warrin states:

“Questions surrounding the accuracy of these algorithms are, at current, largely left up to speculation, because despite these EU-funded projects having concluded, the public has not been given access to the ethics reports, legal assessment or the pilot results on the basis that it undermines the profitability of the company.”

Indeed, this becomes apparent when browsing the documents obtained by Hermes under freedom of information rules. The summary of the technologies involved in iBorderCtrl is overwhelmingly redacted, giving the public scant understanding of, or insight into, how these programs work or what their shortcomings might be — all, apparently, under the auspices of “proprietary information.”

Publicly funded proprietary research to the tune of over 4.5 million euros, which may or may not ever be implemented, or may possibly be sold on to the highest bidder?

Essentially, iBorderCtrl’s argument goes as follows:
Human border control agents can be biased -> a properly designed AI-based system could be fairer and more accurate -> therefore the system should be piloted, and the data needed to validate it should be collected.

However, this argument is severely undercut by the number of hurdles individuals must overcome in order to access their own data, as evidenced by The Intercept’s experience. Furthermore, the details of how the entire process works are withheld on the basis of being proprietary information. Oh, and conveniently enough, one of the ways to counter the biases of this AI-based technology is to acquire more and more data to feed to the algorithm? Thus actively encouraging the passive capture of personal data, which will be dutifully “destroyed or anonymised” (yet most likely preserved within the algorithm, which can ostensibly be repackaged and sold off elsewhere?). And this is supposed to contribute to an “open and democratic debate” on the use of AI in border security?

Along these lines, the latest update from iBorderCtrl itself seems to have been posted around the project kickoff in October 2018, beaming over the afore-linked EU press release that filed the project under “success stories” (even though the project had not even been implemented yet). It must be a very great project indeed if it can be deemed a success before it has even finished.

Although this project primarily targets those travellers fortunate enough to have a passport, projects like it are all links in the chain of strengthening and building up the border-security-industrial complex that is hungrily devouring the individual in order to support the state. Lest we forget, border security technology is often first tested on those whose lack of privilege renders them vulnerable.

Please read the original articles inspiring this feature at Novara and the Intercept. And to take a closer look at the hundreds of blacked out pages, er, project summary released by iBorderCtrl, go here.

TURKEY

German Chancellor Merkel will probably visit Turkey’s President Erdogan in January. The information about the meeting was published by the German newspaper Süddeutsche Zeitung but has not yet been confirmed by the government. According to the newspaper, adherence to the 2016 EU-Turkey deal will be the main subject of the meeting. Since Russia and Syria have been bombing the last opposition-held areas in and around Idlib in Syria, more than 235,000 people have fled the region, the UN says. Erdogan says that some 80,000 of them are fleeing in the direction of Turkey. He stated earlier that, in the case of a new refugee influx, Turkey would not bear the burden alone, and threatened that Europe, especially Greece, would feel the negative consequences. It is expected that Merkel will do whatever she can to keep the EU-Turkey deal alive. The deal basically says that Turkey will try to stop people from crossing over to Greece to apply for asylum in the EU.

GREECE

STEPS Greece is calling upon the employees of Attiko Metro to open the subway stations in order to provide shelter from the bitter cold for those who have none.

“As you probably already know, the majority of street-connected people live in areas of the centre of Athens. The Municipality of Athens seems to be indifferent to this fact, since the only shelters for homeless people that are going to remain open over the next days are located in areas of the wider region.
By extension, the people of the centre will face the cold without any option to protect themselves.

We appeal to the employees of Attiko Metro to leave the subway stations of Omonoia and Monastiraki open so that these people can at least find shelter there.”

#StepsGR #StandingforPeople

ITALY

As we reported on Monday, on New Year’s Eve people holding “humanitarian protection” will supposedly be compelled to leave the Italian reception system. The new regulations go back to a decree issued by former interior minister and Lega leader Matteo Salvini in October 2018, which basically abolished the “humanitarian protection” status for vulnerable people without refugee status or subsidiary protection. For more information, see Monday’s digest here.


Find daily updates and special reports on our Medium page.

If you wish to contribute, either by writing a report or a story, or by joining the info gathering team, please let us know.

We strive to echo correct news from the ground through collaboration and fairness. Every effort has been made to credit organizations and individuals with regard to the supply of information, video, and photo material (in cases where the source wanted to be accredited). Please notify us regarding corrections.

If there’s anything you want to share or comment, contact us through Facebook, Twitter or write to: areyousyrious@gmail.com.

Are You Syrious?

Daily news digests from the field, mainly for volunteers and refugees on the route, but also for journalists and other parties.
