Constant Vigilance: Steps to secure privacy in the age of AI
As a research engineer who recently filed a patent on 3D computer vision, I often find myself pondering the ethical implications of my work and its applications. Research and development is a long and tedious process: projects are often abortive, experiments can be arduous, and the path from idea to implementation contains a profusion of intermediary steps before one can actually see the fruits of one’s labor.
However, once development is handed over to engineering, ideas that were once purely conceptual quickly become reality, and can just as quickly spin out of control. That is precisely what we are seeing with facial recognition systems.
The proliferation of this new technology can be seen quite literally everywhere: the U.S. Army has been working on blocking data poisoning in facial recognition, Chinese firms are working on recognizing faces underneath masks, and London has just started deploying the tech in shopping centers. As recent reporting on Clearview AI has made palpably clear, facial recognition systems intersect with and impact all of our lives. However, I haven’t seen many discussions of Clearview from people who actually research computer vision. I’d like to try to fill that gap.
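Since “data poisoning” may be unfamiliar: one common form is label flipping, where an attacker corrupts a fraction of the training labels so that the resulting model degrades. The sketch below is a toy illustration only, on a synthetic dataset with a logistic regression standing in for a face recognizer; it is not the Army’s actual threat model or defense, and every name and number in it is illustrative.

```python
# Toy illustration of label-flipping "data poisoning". This is a
# deliberately simplified sketch, not the Army's actual threat model:
# the dataset is synthetic and a logistic regression stands in for a
# face recognizer. All names and numbers here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a face-recognition training set: features plus labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train on a copy of the data with a fraction of labels flipped."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"labels flipped: {frac:.0%} -> test accuracy "
          f"{accuracy_after_poisoning(frac):.3f}")
```

As the flip fraction grows, test accuracy falls; that degradation is exactly what a poisoning defense tries to prevent.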
Since most of us have already heard of Hoan Ton-That’s Clearview AI, I’ll leave the rundown of what the app is and how it works to other publishers and get to the point (if you do need a refresher on Clearview, please see this article). The fact of the matter is this: the advancement of facial recognition technology is unprecedented and unpreventable. Many fear that their privacy is now jeopardized because of Clearview, but the reality is that it has been jeopardized for a long time. Clearview is far from the only app, or even the first, to use facial data from the general populace; such technologies have been in use for quite some time.
With this in mind, there are three things, which I’ll call “pillars,” that people need to understand about computer vision and, more specifically, facial recognition technologies.
Pillar 1: Even if privacy is a right, countries cannot guarantee it
We must understand that, when it comes to privacy, countries can only offer policies, and guarantee rights, that are aligned with what technology makes possible. Yes, people want their data protected, but laws can only do so much once algorithmic growth (that is, the natural evolution of machine learning) has passed the point of no return.
Once there is a learning process that enables machines to do a specific task, there’s no stopping it. It’s merely a matter of trying to steer the growth of these functions in the right direction. There’s a difference between effectively controlling something and merely managing what’s already been released and/or manipulated.
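To make that concrete: once face-encoder models are openly available, the retrieval step behind an app like Clearview reduces to a nearest-neighbor search over embedding vectors. Here is a minimal sketch, assuming the embeddings have already been extracted by some face-encoder model; everything in it is an illustrative stand-in, not Clearview’s actual pipeline.

```python
# Minimal sketch of the retrieval step at the core of a face-search
# system. It assumes some face-encoder model has already turned every
# photo into a fixed-length embedding vector; the vectors below are
# random stand-ins, and all names and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 128  # a typical size for face embeddings

# A "gallery" of previously indexed faces (e.g. scraped public photos),
# normalized so that a dot product equals cosine similarity.
gallery = rng.normal(size=(10_000, EMBED_DIM))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def best_match(probe: np.ndarray) -> tuple[int, float]:
    """Return the index and cosine similarity of the closest gallery face."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe  # one similarity score per gallery face
    idx = int(np.argmax(scores))
    return idx, float(scores[idx])

# A probe embedding, as would come from a query photo.
idx, score = best_match(rng.normal(size=EMBED_DIM))
print(f"closest gallery face: #{idx} (similarity {score:.3f})")
```

The point is that once the learned models exist, the core capability amounts to a few lines of arithmetic, which is exactly why it can only be steered, not recalled.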
In this respect, while experimenting, research engineers must be diligent in considering ethics as it relates to all potential applications of their work. This task is not just in the hands of business, sales, or product managers. In fact, it should be mandatory for artificial intelligence researchers to take ethics courses, the same way many doctors and lawyers do for their practice, so that they are versed in the ways R&D can affect the production chain and real-world outcomes.
Pillar 2: You must work with AI
We must understand that policy, research, and ethics are united in a cyclical relationship: a feedback loop that requires constant iteration and rigorous interrogation. We cannot turn our attention away from any one of them. Individuals must work with technologies, not against them, and to do so effectively they must proactively engage with all three components of this feedback loop.
On a more philosophical front, technology is, in a sense, a form of intelligence; should it be treated as such? We have reached a point where systems can draw on the internet’s vast amalgam of data to improve their algorithms, making them ever more “intelligent.” Users should be informed of both the benefits and the possible dangers of interacting with such systems. That is the only way to work with AI effectively.
“People must be informed of the dangers of the technologies they are interacting with. They must be aware of their capabilities.”
Pillar 3: UX Research, the bread and butter of AI R&D in the 2020s?
This leads me to my last point: helping people understand products and whatever dangers they pose is a responsibility that UX researchers must take on, so that the people interacting with these systems are given genuine transparency. It’s essential for people to feel safe with the technologies they are using.
As previously mentioned, some of these advancements are imminent. That makes the usability of these products, and how clearly they communicate what the system is doing, critically important.
Once the public becomes more aware of these concepts, it will inevitably become easier to deal with the many Clearview-like applications that will appear in this decade and beyond: to understand how to avoid or ban them, and how companies can start to enforce regulation around the pitfalls of novel R&D.