Almost Everyone Involved in Facial Recognition Sees Problems

There are multiple calls for limits on this form of AI, but it will be hard for big tech to turn away business

Bloomberg

A display shows a facial recognition system for law enforcement during the NVIDIA GPU Technology Conference in Washington, DC, November 1, 2017. Photo: Saul Loeb/AFP/Getty Images

By Dina Bass

An unusual consensus emerged recently among artificial intelligence researchers, activists, lawmakers and many of the largest technology companies: Facial recognition software breeds bias, risks fueling mass surveillance and should be regulated. Deciding on effective controls, and acting on them, will be a lot harder.

On Tuesday, the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown University Law Center unveiled the Safe Face Pledge, which asks companies not to provide facial AI for autonomous weapons or sell it to law enforcement unless explicit laws permitting such use have been debated and passed. Microsoft Corp. last week said the software carries significant risks and proposed rules to combat the threat. Research group AI Now, which includes AI researchers from Google and other companies, issued a similar call.

“Principles are great — they are starting points. Beyond the principles we need to be able to see actions,” said Joy Buolamwini, founder of the Algorithmic Justice League. None of the biggest makers of the software — companies like Microsoft…
