BY VANESSA MISOON

Episode #2: Don’t believe your eyes 👀

--

Recent interventions by ACLU groups in the use of face recognition software by law enforcement have signaled the era of ‘stop before you start’ grassroots regulation around the technology.

Use of Face Rec (even talk of the use of Face Rec) in any capacity related to policing or government surveillance is forcing conversations around civil liberties that have been direly necessary, yet willfully ignored, for decades.

But first, read about stuff that made it to the water cooler, and a Slack channel or two, this week 👇

Are all Indian doctors super light-skinned?

According to the pics that came up on a Google image search, they are 😕. But as global culture comes together through digital connection, we wanna see imagery that mirrors the actual, wonderful assortment of people and places from near and far. TONL is actively making this happen for us by driving the stock photo industry toward inclusion ❤️…

Them: “Well, what did you expect to happen?” Me: 😠

People outside of the crypto community often ‘victim blame’ when they know someone, or have heard of someone, who lost their crypto [effectively, their money] to identity hackers. Well, at Kairos we consider early investors in crypto real pioneers of the global economy, and we want you to know we feel your pain 😿

Don’t be a clown. You can’t beat Face Rec, bro. 🤡

First of all, are we ignoring the fact that a face full of dramatic clown makeup makes you the most recognizable person in the room? Annnnndddddd — when we all start wearing scary clown makeup to throw off The Man — the algorithms will begin to identify us as our Juggalo selves. Duh.

Please, and Thank You 👄

Winner? D for Danger? Matte, or gloss? 🤔 You’ll never have to dip a Q-tip into a maybe-not-so-sterile tube of product again to find out. By integrating augmented reality into mirrors, MAC is making it futuristically easy to see what’s gonna have you out here living your best makeup life 💄

Back to Face Recognition and Governments:

Lately, I find myself reading, and writing, a lot about why government use of face recognition is dangerous and irresponsible. And while I absolutely maintain this position, I can’t help feeling like this technology, which is so amazing and so essential to our digital lives, is getting an unfair, one-dimensional thrashing.

I almost find myself in the position of having to defend it, as if I’ve been hard on it myself and complicit in the ‘press bullying’ around it. So as I considered what I could offer in defense of face recognition, I recalled an op-ed I’d written for a fantastic criminal defense attorney I blogged for many, many jobs ago. It was an involved piece on eyewitness identification: specifically, why the criminal justice system’s reliance upon eyewitness identification is dangerous, irresponsible, and in need of re-evaluation. Sound familiar?

“Although witnesses can often be very confident that their memory is accurate when identifying a suspect, the malleable nature of human memory and visual perception makes eyewitness testimony one of the most unreliable forms of evidence.” — Greg Hurley, Knowledge and Information Services Analyst, National Center for State Courts

The grassroots regulation happening around the use of face rec in law enforcement is being established, at this stage, mainly through municipalities refusing to deploy the technology under pressure from the ACLU. This signals a growing intolerance of unfair, biased identification processes in a legal capacity, an intolerance that has, until now, been acknowledged almost exclusively by groups like The Innocence Project and pretty intentionally ignored by the criminal justice system.

The current rate of algorithmic error that frightens and concerns us when face recognition is used to identify suspects is nearly identical to the human error associated with eyewitness identification, yet the latter has been central to conviction processes for decades. In fact, according to The Innocence Project, eyewitness misidentification is the greatest contributing factor to wrongful convictions proven by DNA testing, playing a role in more than 70% of convictions overturned through DNA testing nationwide. There are countless humans in prison right now who have been convicted based on eyewitness misidentification. Where’s the outrage????

Certainly my argument here is not “Humans are obviously terrible at identifying other humans, so it’s ok for algorithms to be terrible, too.” It’s not ok for ANY person or machine to have a misidentification rate that leads to an irresponsible, immoral number of wrongful convictions. Ironically, fear of technology has motivated people to call for regulation of face recognition in law enforcement so early in the game that the exact same monster human eyewitness misidentification has created will never even get the chance to manifest digitally. And I’m here for it. But let’s not forget a couple of things:

  1. Humans’ ability to identify other humans in criminal cases has been poor for decades, with no foreseeable improvement.
  2. Algorithms, while currently biased and flawed, can be, and will be, trained to do better.

That’s all, for now.

The Identity Economy™ is written and curated by the team at Kairos.com.

If you want to learn more about Kairos and how Face Recognition can transform your business, drop us a line 😃
