AI — Is Our Privacy Private Anymore?

Peter-Emil Georgiev
Writing for the Future: AI
3 min read · Aug 2, 2018


Photo showing how China’s facial recognition system works. Courtesy of New York Times.

Imagine you are walking in China and decide it isn't worth waiting for the green light at a crosswalk. An AI-powered camera will identify you, and your name and face will appear on a nearby billboard, shaming you and releasing your personal information to the public.

The Chinese government has been integrating facial recognition into public spaces to identify jaywalkers and any other offenders the system encounters.

Facial recognition is not the only concern: AI as a whole has already introduced serious risks to our privacy, and with so many new technologies emerging, existing security measures are not enough to control them.

AI has been edging its way into our lives and can now be found in almost everything we use: personal assistants in our homes such as the Amazon Echo with Alexa, Apple's smartphone companion Siri, and, increasingly, our cars as well. With that come risks we all need to consider before fully trusting it with our information.

Rashida Richardson, an expert on the ethical questions surrounding AI, explained in a talk with high school students why companies often set aside the free data sets available for training their AI: those sets are frequently flawed and biased, and that bias carries over into the artificial intelligence.

This reasoning also lies behind the decision many of these companies make to "spy" on their customers through personal assistants.

One example of this is that almost every major tech company has released an AI assistant that responds when it hears its name. From that we can infer that it "listens" to everything you say and analyzes it, so it can respond at the appropriate moment. Companies use this capability to collect data straight from people's homes, which yields richer and more diverse training data.
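The always-listening pattern described above can be sketched in a few lines. Everything here is illustrative: the function names and the text-based "audio" stream are invented, and real assistants run a small acoustic model on-device rather than matching transcribed text.

```python
# Minimal sketch of the wake-word pattern. All names here are
# illustrative, not any real assistant's API.

WAKE_WORD = "alexa"

def detect_wake_word(audio_chunk: str) -> bool:
    # Real assistants run an on-device acoustic model; here we
    # simply check transcribed text for the wake word.
    return WAKE_WORD in audio_chunk.lower()

def assistant_loop(audio_stream):
    """Continuously 'listen', but only act after the wake word."""
    awake = False
    responses = []
    for chunk in audio_stream:
        if not awake:
            # Every chunk is analyzed, even before the wake word:
            # this is the "always listening" behavior in question.
            awake = detect_wake_word(chunk)
        else:
            responses.append(f"Handling request: {chunk!r}")
            awake = False  # return to passive listening
    return responses

print(assistant_loop(["what time is it", "Alexa", "play music"]))
```

The point of the sketch is that the device must process every chunk of audio just to notice its name, which is exactly why the practice raises privacy questions.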

Such accusations against assistants have been voiced by outlets like the Daily Mail and The New York Times.

While it may sound like a reasonable way to teach neural networks to perform better, this kind of data gathering conflicts with customers' privacy. Such practices may be mentioned, and even outlined, in the privacy policy that comes with the product, but that policy is very often skipped and disregarded.

Considering all this, we still need to ask how the data sets these companies have access to are being improved and whether the bias in them is being removed.

AI is becoming an ever bigger part of our lives, and we are slowly beginning to rely on it: people use personal assistants more and more often, and interest in self-driving cars is higher than ever. With all that, the risks have also increased.

People have entrusted technology with nearly all of their personal information, posting pictures on Facebook, voicing their opinions on Twitter, and so on. In this day and age it is remarkably easy to learn anything and everything about someone, especially with facial recognition.

Facial recognition can be very dangerous, as the example from China shows. With a simple camera and an AI, you can uncover almost anything about a person, and possibly even wrongly identify them as a criminal.
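To make the risk of wrongful identification concrete, here is a toy sketch of how such systems typically work: a face is reduced to a numeric "embedding," and two faces are declared a match when the distance between their embeddings falls below a threshold. The vectors and threshold below are invented for illustration; the takeaway is that a loose threshold produces false positives.

```python
import math

# Toy face-matching sketch. The embeddings and threshold are
# made up; real systems use high-dimensional vectors from a
# trained neural network, but the failure mode is the same.

def distance(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(face_a, face_b, threshold):
    return distance(face_a, face_b) < threshold

suspect   = [0.10, 0.20, 0.30]  # embedding from a watchlist photo
bystander = [0.12, 0.18, 0.33]  # a similar-looking stranger

# With a loose threshold, the innocent bystander "matches"
# the suspect: a false positive.
print(is_match(suspect, bystander, threshold=0.5))   # True
# A stricter threshold avoids this particular error:
print(is_match(suspect, bystander, threshold=0.01))  # False
```

Choosing that threshold is a trade-off: set it too strict and real matches are missed; too loose and innocent people are flagged, which is precisely the danger raised above.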

This has sparked a debate about how much privacy we can really have in a public space, and whether this technology should be allowed in public at all. It can be argued that in a public space no one really has any privacy. On the other hand, it can be said that everyone has a right to their own personal space, and that no one should be able to learn so much about someone.

With this much new technology, people need to work on solutions to the security risks it brings. New technologies, such as blockchain and quantum computers, may help resolve some of these issues.

Blockchain, specifically, can record every transaction made by users while keeping the participants essentially anonymous. Financial services companies are already using this technology to improve security.
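That combination of traceable transactions and anonymous participants can be illustrated with a toy hash chain: each block records a transaction and the hash of the block before it, so history is tamper-evident, while participants appear only as opaque addresses rather than names. The address strings below are made up; this is a sketch of the idea, not a real blockchain.

```python
import hashlib

# Toy hash chain. Every transaction is recorded and linked to the
# previous block's hash (traceable history), while participants are
# opaque addresses, not names (pseudonymity). Addresses are invented.

def make_block(prev_hash, sender_id, receiver_id, amount):
    record = f"{prev_hash}|{sender_id}->{receiver_id}:{amount}"
    return {"data": record,
            "hash": hashlib.sha256(record.encode()).hexdigest()}

chain = [make_block("genesis", "addr_9f3a", "addr_71bc", 5)]
chain.append(make_block(chain[-1]["hash"], "addr_71bc", "addr_2e44", 2))

# Tampering with an earlier transaction changes its hash, which no
# longer matches the link stored in every later block:
tampered = make_block("genesis", "addr_9f3a", "addr_71bc", 500)
print(tampered["hash"] != chain[0]["hash"])  # True
```

This is why the technology can improve security: the ledger exposes any rewriting of history without requiring real identities to be stored in it.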

Quantum computers, for their part, are a new step in the development of computing and can be many times faster than the machines we have today. That power gives them the potential to improve on previous security methods and to enable better encryption.
