Facial Discrimination: Why Turnbull’s radical surveillance plans warrant a closer look.

This week, state governments were quick to cave to the federal government’s requests to build the “National Facial Biometric Matching Capability” (or NFBMC) — a national Real Time Facial Recognition system using driver’s licence photos along with passport and citizenship photos.

In what Australian digital rights advocacy group Digital Rights Watch is calling a “gross overreach into the privacy of everyday Australian citizens”, all states and territories signed the “Intergovernmental Agreement On Identity Matching Services”, an agreement outlining how governments will “share and match identity information”.

Prime Minister Malcolm Turnbull turned to the media in an attempt to assuage concerns and brought out something that’s half misdirection and half cliche: Facebook already has all your information. “People put an enormous amount of their own data up in the public domain already.” I’m glad he mentioned Facebook, but we’ll get to that.

He continued, stating the new system is necessary for “being able to access IDs swiftly and using automation to do so, rather than being a clunky manual system.”

Automation! That’s a fun, futuristic word. Automation is impressive, right? When was the last time the government turned to automation to help them out?

That would be this year’s ‘RoboDebt’ debt recovery program from the Department of Human Services, with the government sending out over 20,000 false or unnecessary debt notices. Almost 1 in every 5 notices was wrong in some way. (You can read more about RoboDebt and the government’s “privacy omnishambles” here.)

Of course, we’re assured the technology behind it is solid and the algorithms it’s running are for our benefit.

Facebook’s algorithms were working for the benefit of its advertisers when — unbeknownst to Facebook at the time — they allowed advertisers to target categories like “Jew hater.”

Following this, Facebook’s COO, Sheryl Sandberg, put out a statement in which she claims, “We never intended or anticipated this functionality being used this way — and that is on us.”

It’s understatements like this one that get trotted out whenever an algorithm is causing a company grief. Just days ago, Google’s search algorithms featured several 4chan threads spreading false information about the Las Vegas shooter in its ‘Top Stories’ results. When The Outline’s William Turton enquired, Google explained it away as an algorithmic fault.

The biggest problem is that the algorithm isn’t actually at fault. As Turton goes on to point out:

A truly faulty algorithm would be like a computer program that does not compile or catches itself in an infinite loop. These algorithms are executing; they are doing what they were designed to do.

One major problem with the algorithms used by private companies and governments around the world is their “black box” nature. The term is used because there is no transparency: we can’t look inside to see how these algorithms are designed to work or how they reach the decisions they make. We can never be sure exactly what they have learned.
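
To make the idea concrete, here is a minimal sketch of the problem in Python, using scikit-learn and invented toy data (not any of the systems described in this piece): even with full access to a trained model, the only “explanation” on offer is a pile of numeric weights.

```python
# A minimal sketch of the "black box" problem, using scikit-learn and
# invented toy data, not any real government or Facebook system.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy data: 1,000 made-up "people", 20 numeric features each.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

# The model confidently classifies a new record...
print(model.predict_proba(X[:1]))

# ...but the only "explanation" it can offer is thousands of raw weights,
# which say nothing a human can read about why it decided what it did.
print(sum(w.size for w in model.coefs_))
```

Nothing in that output tells you which features drove the decision, let alone whether the model has quietly learned something you would object to.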

When a research team from Mount Sinai Hospital applied Deep Learning principles to a data set of 700,000 patients, they found the resulting system — known as Deep Patient — was unexpectedly accurate at anticipating disorders like schizophrenia. The team was happy with the result but the system offered them no additional insight into how it was reaching these conclusions.

Even when designed by professionals to fit a specific purpose, Deep Patient’s black box nature meant it could not explain to the research team how it did what it did.

The more code, the more complexity. The more complex an algorithm becomes, the less we understand about how it’s working. Imagine how much code Facebook ran on when it launched in 2004. Over the years, countless features have been added, and Facebook currently runs on 62 million lines of code. At this point, it would be near impossible for companies to explain how their own systems work. Who at Facebook can give a clear explanation of how those millions of lines of code reach the decisions they reach?

The fact that these algorithms and systems are designed by teams of humans adds yet another layer of complexity and mystery.

Research from the University of Virginia found that two image sets frequently used for training image recognition software displayed gender biases relating to activities like cooking and sports. Within one training set, images of “cooking” were 33% more likely to involve females than males. When this already biased set was used to train an image recognition algorithm, its likelihood of associating cooking with females jumped to 68%.
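
As a rough illustration of how that kind of amplification is measured, here is a hedged sketch in Python with invented numbers (not the University of Virginia study’s data or code): compare the gender skew of a label in the training annotations against the skew in the model’s predictions.

```python
# Rough sketch of measuring bias amplification, with invented numbers,
# not the University of Virginia study's actual data or method.

def female_share(examples, activity="cooking"):
    """Fraction of images for an activity whose annotated agent is female."""
    relevant = [e for e in examples if e["activity"] == activity]
    return sum(e["agent"] == "female" for e in relevant) / len(relevant)

# Hypothetical training annotations: two thirds of "cooking" images show women.
training_labels = (
    [{"activity": "cooking", "agent": "female"}] * 66
    + [{"activity": "cooking", "agent": "male"}] * 34
)

# Hypothetical model output on new images: the skew grows further.
model_predictions = (
    [{"activity": "cooking", "agent": "female"}] * 84
    + [{"activity": "cooking", "agent": "male"}] * 16
)

print(f"training skew:  {female_share(training_labels):.0%}")
print(f"predicted skew: {female_share(model_predictions):.0%}")
# If the predicted skew exceeds the training skew, the model hasn't just
# reproduced the bias it was shown; it has amplified it.
```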

When popular app ‘FaceApp’ used artificial intelligence to make your selfies “old, young, or hot”, people found the ‘hot’ feature made them look whiter. Why? Most likely because the AI was trained using a predominantly white database, or using a database in which whiter features were ranked higher (whether consciously or subconsciously).

But don’t worry, FaceApp’s CEO apologised (in a manner similar to Facebook and Google): “It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour.”

As AI systems take over more complex tasks, the dangers become more apparent.

FaceApp’s biased training set meant it accidentally made some people look whiter, which is a definite problem but not a life-threatening one. But what if our government’s Facial Recognition software is trained on that same set? Or on another set with unrecognised biases?

A March 2017 hearing in the U.S. House of Representatives found that the Facial Recognition Technology the FBI has been using for years “has accuracy deficiencies, misidentifying female and African American individuals at a higher rate.”
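
For a sense of what a finding like that means in practice, here is a hedged sketch, with invented match results rather than the FBI system’s data, of how misidentification rates are broken down by demographic group:

```python
# A hedged sketch of measuring disparate misidentification rates,
# using invented match results, not the FBI system's actual data.
from collections import defaultdict

# Each record: the demographic group of the probe photo, and whether the
# system returned a wrong identity for it.
results = [
    {"group": "white male", "misidentified": False},
    {"group": "white male", "misidentified": False},
    {"group": "African American female", "misidentified": True},
    {"group": "African American female", "misidentified": False},
    # ...thousands more rows in a real evaluation...
]

counts = defaultdict(lambda: [0, 0])  # group -> [misidentified, total]
for r in results:
    counts[r["group"]][0] += r["misidentified"]
    counts[r["group"]][1] += 1

for group, (errors, total) in counts.items():
    print(f"{group}: {errors / total:.0%} misidentification rate")
# If these rates differ substantially between groups, the system is not
# just "inaccurate" in the abstract; it is inaccurate unevenly.
```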

The next thing to take into consideration is where these systems are coming from. As digital rights campaigner and Cryptoparty founder Asher Wolf found, Australia’s Department of Immigration and Border Protection, as well as the South Australian and Northern Territory governments, are using NEC’s NeoFace technology. (The brochure is a great read too. APIs, tech specs, mobile apps. All features which I’m sure have no capability for misuse or scope creep.)

You might be thinking “NEC? That’s fine. They made my microwave!” or you might have heard pro-Facial Recognition parties saying “We already have systems on every police car to scan every licence plate!” and they’re both… points.

When The Intercept’s Ava Kofman wrote about the current push from law enforcement agencies in the US to link Facial Recognition databases with police body cameras, she heard from experts fearing such systems are ripe for overuse, creeping scope and profit motives.

As an example of this, she looked at the development of Automatic Licence Plate Readers (ALPRs).

ALPR systems, which capture and digitize license plates, were originally pitched as a way to reduce car theft. But with auto theft declining, it was hard to justify the technology’s high cost, and so a private company, Vigilant Solutions, cooked up a scheme to offer it to departments for free. But in exchange, municipalities give Vigilant their records of outstanding arrest warrants and overdue court fees, which the company uses to create a database of “flagged” vehicles. When ALPR cameras spot a flagged plate, officers pull the driver over and ask them to pay the fine or face arrest. For every transaction brokered between police and civilians pulled over with flagged plates, Vigilant gets a 25 percent service fee.

It suddenly doesn’t seem like such a stretch for a future government to want to make some money back from a system that costs millions a year to run. Luckily, the Intergovernmental Agreement On Identity Matching Services has a handy Part 5 titled “Private Sector Access”, detailing how private organisations can get access to the Document and Facial Verification systems it outlines.

It’s this kind of ‘feature or scope creep’ that Australian Privacy Foundation board member, Professor Katina Michael, points to as an established trend in technology.

“It’s not going to take long for these systems to be hacked, no matter what security you have in place and once it’s hacked, that’s it — everyone’s facial images will end up on some third-party selling list.”

The clearest local example of scope creep is the Abbott Government’s Mandatory Data Retention scheme. The measures were introduced “in the name of protecting the country from terror threats and were a response to the increase of Australian jihadists fighting overseas and local attacks.”

During the first months of the scheme, there were 333,000 authorisations made for retained data, with the majority of its use targeting illicit drug offences (over 55,000 requests). The system’s use turned quickly away from terror (4,400 requests) to robbery, theft, fraud and property damage.

The best global examples of countries rolling out widespread Facial Recognition systems currently come from places like Russia and China, the latter of which is already morphing its 176 million cameras and top-of-the-line facial recognition software into a Minority Report-style network. As Bloomberg Technology Asia’s David Ramli tweeted, China uses this system to catch jaywalkers, fine-skippers and dissidents. He summarises the danger of scope creep nicely in his next tweet.

And as Malcolm Turnbull was so quick to point out, there can be no “set-and-forget” national security laws; he has announced a follow-up summit next year, essentially setting the stage for future changes to the just-announced system.

During the remarks given after all state and territory leaders had acquiesced, Turnbull stressed that we need to be able to identify people suspected of terrorist offences or terrorist plots in real time.

Terror was also the crux of his Daily Telegraph opinion piece, which opens by contrasting the safety of our football grand finals against the events that unfolded in Las Vegas. He even goes as far as to pull out such alarmist rhetoric as “Any of us could have been there”, after which he launches into his pitch for facial recognition and changes to keep us “safer”.

It is without a hint of irony that he describes terrorists as those who “seek to frighten us (or) to intimidate us to change the way we live.”

In 2006, two Australian researchers put forth a theory of ‘thought contagion’, positing that terrorism causes fear and anxiety far beyond where it occurs. This helps explain why many people living in Australia believe terrorism poses a significant threat.

“In Australia terrorism is only a remote possibility yet politicians, the media, and the public often argue that terrorism poses an immediate threat.”

A study from ANU last year found that public views on the appropriateness of counter-terrorism measures with a potential direct impact on civil liberties are shaped by a perception of imminent threat. In a poll run as part of that study, 45% of respondents reported they were concerned about themselves or a family member being a victim of a future terror attack in Australia.

As Greg Austin, a professor at UNSW’s Australian Centre for Cyber Security, points out, although Australia’s National Terrorism Threat level is at Probable (a level never reached before September 2014), “more Australians have died at the hands of police (lawfully or unlawfully) in ten years (50 at least from 2006 to 2015) or from domestic violence in just two years (more than 318 in 2014 and 2015) than from terrorist attacks in Australia in the last 20 years.”

Queensland’s Privacy Commissioner, Philip Green, warns that these moves from the government demand a proper debate before we risk “mass surveillance, screening and predictive policing”, finishing by asking:

Do we trust the federal government, given they’ve had about five failures in the last couple of years on delivery of this sort of infrastructure — should we trust them?

After all you’ve read today, do you?


Consider the record: data breaches at government agencies including the Department of Immigration and Border Protection and the Australian Federal Police; the 14 reported data breaches the Australian Bureau of Statistics has suffered since 2013; the Australian Tax Office losing 1 petabyte of data; 2016’s #CensusFail; the leak of a dataset containing information on 96,000 public servants; Medicare card details available for purchase on the Dark Web; and a “de-identified” dataset covering 10 per cent of all Medicare patients and the medical services they had accessed from 1984 to 2014, published on data.gov.au, which was found to contain enough information for re-identification (a breach serious enough to prompt the government to create a retrospective law making it illegal even to attempt). The Turnbull government has proven time and time again that it cannot be trusted with technology.