You might not know it, but surveillance is happening all around us all the time. Businesses, governments, and nonstate actors are gathering information — whether it’s to sell us something on social media or screen us when we apply for a home loan. Researchers from both The University of Texas at Austin and New York University participated in a Good Systems panel last month where they discussed various types of surveillance. We caught up with a few of them to find out more about this sometimes obvious (but often elusive) practice. Read what they had to say and watch the full panel below.
Iván Chaar-López, Assistant Professor, American Studies
When I began work on my manuscript as a doctoral student at the University of Michigan, I came across a news article that talked about how Predator unmanned military drones were used on the U.S.-Mexico border in the early 2000s and 2010s. I wanted to understand how that came about and how these devices are used on the border.
Drones are really part of a very long process where electronic and digital technologies have been used to make sense of people and goods moving across borders. They help track and monitor these flows as well as figure out where and how Border Patrol agents are deployed. We have to start thinking of drones less as a plane flying above our heads and more as a system within a larger system. Drones are an amalgam of different technologies, including sensors that pick up and transmit infrared, radar, and electromagnetic data. And these technologies are critical in target surveillance.
The use of sensor technologies at the U.S.-Mexico border dates back to the 1970s. Intrusion detection systems, which comprised various ground sensors, had been developed by the U.S. military and industry during the Vietnam War to monitor Vietcong movements across the border between North and South Vietnam. These ground sensors could pick up things such as vibrations and body heat, but they failed to exert control over enemy border crossings because they were easily "fooled" by false triggers. The U.S. Department of Defense then saw the U.S.-Mexico border as an experimental site where it could refine sensor technology and assert control.
Sensors continue to show their limitations. Security officials have stated that the cameras on their drones can pick up only a human form, a shadowy figure. It seems that the Department of Homeland Security would like a higher-quality image that would allow individual identification or provide better data about the border environment. Still, today's drones offer enough information for Border Patrol agents to decide where to mobilize if they see a potential target. In this sense, what they successfully do is reinscribe on the border a hunter-hunted relationship. Drones, essentially, are used as part of a political project to create boundaries of belonging and exclusion. But this is inescapably a fraught process. There are no borders I can think of that are really solid or impenetrable. They are porous and, as a result, somewhat frail. That's not to say that efforts to produce borders cannot be lethal or have serious repercussions; they can, and they do. At the U.S.-Mexico border, thousands of people have lost their lives in their efforts to cross to the "other" side.
Whenever I engage a technology, I think of the statement by a technician at the Immigration and Naturalization Service: every new technology generates new problems. They don’t just “solve” things. It’s a never-ending dynamic. So, the allure of technological solutions needs to be tempered by an understanding of the kinds of relations that produce technology and the relations they make possible.
Sam Lavigne, Assistant Professor, School of Design and Creative Technologies
Everything you do online and, to some degree, offline generates data that are being collected by a private company. This includes the websites you visit and the searches you type in. On Google search, for example, this might include products you purchase and loans you apply for. All these different bits of information say something about you and are being used to fit you into different categories to show you different targeted advertising. It is absolutely a form of surveillance because you can’t really opt out of it.
We did a project where we examined different myths people have about how this online surveillance works. A lot of people suspect that their phone's microphone is listening to them because they have a conversation with someone about something and then see an ad for the very thing they were talking about. We went about collecting stories from people who had these experiences and created an archive. What's interesting, though, is that no one is actually listening to or recording your conversations. Instead, these companies are building models from a tremendous amount of data that, in a weird way, predict what you are going to talk about. The reason we get suspicious is because we have a sort of bias: we notice the one instance where we are talking or thinking about something and then see an ad for it, and it strikes us. But it's really just coincidence.
Even so, what are the consequences of this kind of data collection? Well, for instance, it can be used to determine if you qualify for a loan or if you get health care benefits. There is a lot of potential harm that could come of it. The fundamental question is this: what do we, as a society, allow to be put in a marketplace to be bought and sold? Personal information and data should not be a commodity. The only way to enforce something like that is to have laws that describe the proper use of data.
There are also things you can do as an individual to make it harder, like putting an ad blocker on your browser. But, in the end, there's not a lot of agency you have as an individual unless you want to completely stop using the internet. The business model of the internet is just so terribly broken. The ways you communicate with your friends and family are owned by private companies, and they are going to try to make money off you. We need to have communications systems that are not for profit.
Erin McElroy, Postdoctoral Researcher, AI Now Institute, New York University
Tenant screening is one of many facets of the property technology industry, an industry that merges real estate and technology capital. Many in the real estate sector call it proptech, but here I’ll refer to it as landlord tech, as few tenants have ever heard of the industry name. Landlord tech includes an array of platforms, systems, hardware, software, and more, all aimed at “disrupting” the real estate industry with new technologies.
Currently, there are over 2,000 tenant screening companies in the U.S. that gather information about past evictions and merge that data with individuals’ credit histories and criminal records obtained from law enforcement agencies. These systems are extremely faulty in how they join data together, and their outcomes produce reports that are disproportionately used to deny housing to people of color. And now the screening process involves AI and machine learning as a way to ban and blacklist certain tenants. There is concern that soon, this will impact tenants who have been unable to pay their rents during COVID-19.
As an industry, landlord tech saw dramatic changes after the 2008 financial crisis. This was in part due to big investment companies buying up tens of thousands of foreclosed properties through various shell companies. Today, tenants often don’t know who their landlord is and pay rent every month to a shell company, limited liability company, or limited partnership. We at the Anti-Eviction Mapping Project have created an online tool that allows tenants to figure out what properties their landlord’s various shell companies own, and if evictions have taken place in any of them. This helps tenants work together to make multi-building demands and have more collective power. Instead of the landlord having its gaze on tenants, we are trying to flip the gaze back on the evictor and ownership networks so we can stop evictions and displacement. The tool, Evictorbook, is currently available to our community partners in San Francisco, and we’re in the midst of expanding it to Oakland.
Another interesting form of surveillance now taking place in housing involves the use of facial recognition technology. In New York City, many large, multi-unit buildings are implementing facial recognition systems that mandate residents scan their faces to enter their own homes. These systems have long been shown to be discriminatory toward people of color, particularly Black women. It is highly problematic to install them in tenant housing, especially in buildings that house people historically targeted by this kind of technology. More often than not, these systems are installed without the consent of tenants and without testing or disclosure of associated harms. In other words, tenants generally have no say even though the technology is being implemented in their homes. Landlords are interested in using these systems to determine if people are breaking their leases through petty violations so that landlords can catch and evict them, causing a great deal of fear among tenants. That said, some tenants have begun to organize against it, and we at the Anti-Eviction Mapping Project have created a website, Landlord Tech Watch, to better map landlord tech and its associated harms.
To hear more from these experts, watch the full panel below.
Please join us on this journey.
Good Systems is a research grand challenge at The University of Texas at Austin. We’re a team of information and computer scientists, robotics experts, engineers, humanists and philosophers, policy and communication scholars, architects, and designers. Our goal over the next eight years is to design AI technologies that benefit society. Follow us on Twitter, join us at our events, and come back to our blog for updates.
Iván Chaar-López is an assistant professor in the Department of American Studies at UT Austin. His research and teaching examine the politics and aesthetics of digital technologies. He is especially interested in the place of Latina/o/xs as targets, users, and developers of digital lifeworlds. He is currently working on a book, under contract with Duke University Press, about the intersecting histories of electronic technology, unmanned aerial systems, and boundary-making along the U.S.-Mexico border since the mid-twentieth century.
Sam Lavigne is an assistant professor in the School of Design and Creative Technologies at UT Austin. As an artist and programmer, his work explores issues around data, surveillance, policing, and automation. He has exhibited nationally and internationally at venues such as the Whitney Museum, Lincoln Center, the New Museum, Ars Electronica, and IDFA DocLab. He was formerly a Magic Grant fellow at the Brown Institute at Columbia University and Special Projects editor at The New Inquiry.
Erin McElroy is a postdoctoral researcher at New York University’s AI Now Institute, researching the digital platforms used by landlords that surveil, racialize, and evict tenants. She earned a doctoral degree in Feminist Studies from the University of California, Santa Cruz with a project based on the politics of race, space, and gentrification in and between Romania and Silicon Valley. Erin is also cofounder of the Anti-Eviction Mapping Project and the Radical Housing Journal, both projects committed to housing justice and intersections of research and tenant organizing. In the fall of 2021, McElroy will join UT Austin as an assistant professor in the Department of American Studies.