A Tale of Two (Small) Victories

Elizabeth M. Renieris
Feb 7, 2020

What do BIPA and SyRI-related developments tell us about the future of privacy?


Despite an otherwise dystopian start to 2020, there have also been a few glimmers of hope in the early weeks of the new decade. These recent developments light a path forward in an otherwise dark digital future and remind us that the solution to our present woes may be hiding in plain sight.

Much of this hope stems from a small (fewer than 1,500 words) but mighty piece of legislation: the Illinois Biometric Information Privacy Act, or BIPA. BIPA prohibits companies from collecting individuals’ biometric identifiers without their prior informed written consent and bars them from selling or otherwise profiting from biometric information.

In January, the Supreme Court denied Facebook’s petition to hear Patel v. Facebook, in which the plaintiffs alleged that Facebook’s facial recognition-enabled photo tag suggestion tool violated their rights under BIPA. While Facebook argued that the plaintiffs could not demonstrate a concrete injury, the Ninth Circuit found that the mere collection of the plaintiffs’ biometric information was a sufficient harm under the statute (i.e., an injury-in-fact). Approximately one week later, the tech giant agreed to settle the case for a record $550 million.

Patel may be just the tip of the iceberg: BIPA-related litigation and class actions are proliferating. For example, the controversial facial recognition startup Clearview AI is facing multiple class-action lawsuits alleging that it violated plaintiffs’ rights under BIPA by scraping “publicly available” images of individuals from sites like Twitter, Google, and Facebook. To date, at least three such actions have been filed.

Incidentally, some of the sites whose images were scraped have responded with cease-and-desist letters to Clearview AI, demanding that the images be taken down or deleted, demands predicated on alleged violations of their own terms and conditions. (Facebook was the last to follow suit, perhaps because Facebook’s story literally began with the unauthorized scraping of images via its predecessor site, Facemash.) Nevertheless, the more interesting and relevant battles are happening in the courtroom, not in some white-shoe law firm’s conference room.

Just yesterday, the District Court of The Hague in the Netherlands ruled that SyRI (Systeem Risico Indicatie), an automated system used by the Dutch government to detect welfare fraud, violates an array of human rights, including Article 8 (the right to respect for private and family life) of the European Convention on Human Rights (ECHR) and Article 17 (the right to privacy, family, home, and correspondence, and to protection of honour and reputation) of the International Covenant on Civil and Political Rights (ICCPR).

More specifically, the Court found that SyRI’s implementation violated the principles of necessity and proportionality, two fundamental principles of international law: the nature of the system, including its broad application, its lack of transparency, and its disregard for the purpose limitation and data minimization principles, was neither necessary for nor proportionate to its intended purpose of fraud detection. Under Article 8(2) of the ECHR, that purpose could not justify the interference with individuals’ private lives.

As Privacy International so eloquently put it, “the SyRI ruling marks the beginning of the rights-based resistance against the surveillance of welfare claimants.” But I would go even further. The Dutch SyRI ruling, coupled with recent BIPA-related developments in the U.S., marks a widespread rights-based resistance against the emerging digital dystopia. These developments speak to the power of established human rights laws and fundamental principles to protect us from the as-yet unregulated risks posed by rapidly emerging technologies like facial recognition and black-box algorithmic processing.

The emerging case law shows us that we don’t need #ownyourdata campaigns or fancy data-as-property proposals to preserve our privacy, dignity, and autonomy. Rather, we need our legal institutions to apply decades-old laws and principles to new and emerging technologies through an unchanging lens of our shared humanity. After a few panicked years of drowning in data and grasping at straws to govern this new reality, it may well be that the answer has been right in front of us all along.

Berkman Klein Center Collection

Insights from the Berkman Klein community about how technology affects our lives (Opinions expressed reflect the beliefs of individual authors and not the Berkman Klein Center as an institution.)

Written by Elizabeth M. Renieris

Founder @ hackylawyer | Fellow @ Berkman Klein Center for Internet & Society at Harvard | Privacy Pro (CIPP/E, CIPP/US) | Privacy, Identity, Blockchain/DLT
