Recently, the headline “Amazon’s facial ID incorrectly identifies members of Congress as criminals” appeared across numerous news outlets after the ACLU published findings from a study conducted with Amazon’s image recognition tool. The concern revolves around organizations and government agencies using such results to further discriminate against people. Amazon was quick to respond, pointing out that the number of ‘false positives’ was drastically lower (zero, in fact) when a few settings were modified.
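The setting at issue was reportedly the confidence threshold: the ACLU ran its test at a default-style 80% threshold, while Amazon recommends a much higher one (99%) for law-enforcement use. A minimal sketch makes the effect concrete; the names and scores below are entirely made up for illustration, not Rekognition output:

```python
# Hypothetical face-match results: (candidate, confidence score 0-100).
# These numbers are illustrative only, not from any real system.
matches = [
    ("Person A", 99.2),
    ("Person B", 85.1),
    ("Person C", 81.7),
    ("Person D", 99.6),
]

def confident_matches(results, threshold):
    """Keep only candidates at or above the given confidence threshold."""
    return [name for name, score in results if score >= threshold]

# At a low threshold, weak (likely false) matches slip through...
print(confident_matches(matches, 80))  # all four candidates
# ...while a 99% threshold keeps only the strongest matches.
print(confident_matches(matches, 99))  # ['Person A', 'Person D']
```

The tool is the same in both runs; only the user’s chosen threshold changes what counts as a “match,” which is exactly why Amazon and the ACLU could read the same experiment so differently.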
The technological age that began in the 90s and 00s shows no sign of stopping anytime soon. Advances are made every day, and as technology evolves, choices like sacrificing either data privacy or innovation are becoming more frequent. This leaves the question: how can we prevent both misuse and abuse of such technology?
Who is responsible?
Who would be to blame for misuse: Amazon, the company that developed the tool for anyone to use, or the ACLU, the user that seemed misinformed about how to use the tool as accurately as possible?
With drugs, it is simple to determine who is responsible for misuse or abuse. With technology, it isn’t as easy, since so many users use it however they wish. Okay, so maybe not however they wish; after all, companies do ban users who violate their terms of service. But those terms often exist only to keep servers from crashing or to shift responsibility away from the company itself.
Legislation does exist now that holds companies responsible for data hosted on their servers. For instance, YouTube regularly takes down uploaded copyrighted content in accordance with the DMCA. And with the GDPR taking effect earlier this year, companies have rushed to comply with its requirements for greater data transparency and control or be cut off from the European Union.
Oftentimes, misuse isn’t so black and white. For an example, just look at Google, which accounted for roughly 90% of the world’s searches in the past year alone. There is compelling evidence that it engages in anti-competitive practices, whether intentionally or as an unintended consequence of promoting its own products, and it continues to grow without any repercussions whatsoever!
While the ACLU’s purpose was to validate the growing concern over technological means of discrimination, it seemed to point its finger at Amazon and the other developers who provide the means of enabling discrimination. After all, the developers are the ones who can control how their tool is used. Opponents argue that blame should fall on the ‘tool-wielders’, such as police who are actively discriminating.
Can legislation actually be enforced?
A simple answer to the problem: if either the company or the user engages in malpractice, just pass laws and make it illegal. Yet this reasoning omits the fact that technology is ever-changing, and legal verbiage may be inapt at defining such misuse. For companies, a suggestion could be to let consumers and investors rock the boat. In Google’s case, however, this might have a negligible effect.
The solution needs to be a combination of preventative measures and legislation. Legislation should deter harmful practices, while preventative measures make them difficult to pursue. In the case of machine learning, developers and users can easily fail to identify bias in data and algorithms if they are not careful. Companies should work closely with certain users in a consulting capacity to specifically avoid misuse and aid in more informed decisions. Ultimately, it would still be up to the users to consider all the cautions of using the technology and form proper procedures to account for them.
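As one illustration of such a preventative procedure (a simplified sketch with made-up evaluation data, not any company’s actual tooling), a user could compare false-positive rates across demographic groups before trusting a model’s matches:

```python
from collections import defaultdict

# Made-up evaluation records: (group, predicted_match, actual_match).
records = [
    ("group_x", True,  False), ("group_x", False, False),
    ("group_x", False, False), ("group_x", True,  True),
    ("group_y", True,  False), ("group_y", True,  False),
    ("group_y", False, False), ("group_y", True,  True),
]

def false_positive_rates(rows):
    """False-positive rate per group: wrong matches / all true non-matches."""
    fp = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted, actual in rows:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

# A large gap between groups is a red flag worth investigating
# before the tool is used on real people.
print(false_positive_rates(records))
```

A check like this doesn’t require legislation at all; it is exactly the kind of informed procedure a consulting relationship between developer and user could establish.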
A solution needs to be developed soon, with companies, governments, AND public organizations working together in the best interest of everyone, especially given the current pace of technological advancement.