Surveillance and Sovereignty: Reclaiming Privacy in the Age of AI

Dag
Published in tomipioneers
8 min read · Mar 13, 2024

Many of you are already familiar with tomi’s mission to challenge the current state of the internet and the established systems that support its dominance. The internet we all know and love is controlled by a handful of tech giants, a concentration of power that has profoundly impacted every facet of our lives.

It’s undeniable that our existence has increasingly migrated online: right now, you’re using the internet to read this article. Beyond that, you likely use it to stay connected with friends on social media, for entertainment on platforms like Netflix or YouTube, and perhaps even for work, whether fully remote or in some hybrid form. My point is, the internet has woven itself into the fabric of our existence. Yet for many, it remains an abstract presence, visible but intangible, making it difficult to critically assess or even fully comprehend.

Despite its omnipresence, we often overlook the internet’s flaws and choose instead to focus on its benefits, from the endless stream of entertainment to the opportunities it presents for global connection. However, it’s crucial to recognize the internet’s darker aspects. Notably, the data harvesting practices of major tech companies stand out, as they collect our information to fuel highly personalized and intrusive advertising campaigns. These campaigns leverage the most intimate details of our lives, silently harvested as we browse the online world, turning them into data points to be sold without our consent and used against us.

Furthermore, the internet has become a tool for surveillance by governments and authoritarian regimes. While it can serve a positive purpose in tracking down threats to public safety, it also has a sinister side, with instances of journalists, politicians, and ordinary citizens being persecuted for merely expressing their opinions online.

The development of artificial intelligence (AI) introduces a new dimension to these concerns. AI’s ability to process and analyze data surpasses anything previously possible, enabling the handling of vast amounts of information at speeds and with a level of complexity that no human could ever match. This capacity dramatically enhances both the scope and the precision of data analysis, reshaping our approach to information in ways we’re just beginning to understand.

This technological leap also exacerbates worries about the centralized control of the internet — a platform where our data is not merely a commodity for advertisers, but also a means for monitoring every part of our lives. From our daily routines and personal interests to our social interactions and consumption habits, nothing is beyond the reach of this scrutiny.

These pervasive monetization and surveillance practices are precisely why tomi was founded: to develop an alternative internet that prioritizes and truly respects its users. While I may sound like a broken record talking about online privacy and tomi’s alternative internet to our long-time followers here on Medium, I want today’s discussion to take a different angle. I want us to venture beyond the confines of our screens and digital footprints, to instead explore the “real” world around us. In the age of AI, our understanding of public spaces is undergoing a profound transformation, driven by the advent of AI-powered surveillance technologies.

So, we must ask ourselves: What remains of our privacy once we step outside the safety of our homes?

A Closer Look at London’s Underground Surveillance

Let’s start with a look at what’s unfolding in the London Underground. London, known for its extensive CCTV network — one of the most monitored cities globally — recently upped the ante. The city tested real-time AI surveillance tools designed to spot crimes ranging from fare evasion to more serious threats like weapon possession. During a trial at Willesden Green Tube station, Transport for London (TfL) employed AI to analyze live footage, issuing over 44,000 alerts, with 19,000 directly communicated to station staff. This test, intended to enhance safety and security, marks a significant shift towards more invasive monitoring practices.

However, the implementation wasn’t without its glitches. The AI misidentified children as fare dodgers and struggled to distinguish between different objects, revealing the system’s inherent flaws. Moreover, throughout this experiment, passengers were not informed that they were under surveillance by AI technologies, nor were they made aware that their behaviors and movements were being analyzed in real time. This omission raises profound concerns about consent and the right to privacy, highlighting the ethical dilemmas posed by deploying such surveillance technologies in public spaces without clear communication or public awareness.

While the objective might have been to ensure public safety, the trial raises critical questions about privacy, consent, and the accuracy of such surveillance technologies. The blending of AI with public surveillance underscores a concerning trend: the escalation of monitoring capabilities without a clear understanding of their ramifications for privacy and individual freedoms. The London Underground trial serves as a cautionary tale about the importance of transparency and ethical considerations in the use of AI for public surveillance.

Moscow’s Descent into a Surveillance Nightmare

In Russia, Moscow’s transformation under the ‘Safe City’ project unveils a harrowing scenario where advanced surveillance technologies morph into mechanisms of political repression and intimidation. Originally conceived to bolster urban security, the initiative swiftly became an apparatus of authoritarian control in the wake of Russia’s invasion of Ukraine. With an extensive network of over 217,000 surveillance cameras, the Russian capital has weaponized facial recognition technology against its own people — targeting not just criminals but political activists, protestors, and independent journalists.

This sweeping surveillance regime has effectively nullified the right to privacy and dissent, marking a grave departure from the project’s intended goal of enhancing city life. Particularly disturbing is the utilization of facial recognition to detain individuals even before they can voice their opposition, demonstrating a preemptive approach to quash dissent. The technology’s deployment extends beyond public spaces, hinting at a pervasive watch that follows citizens relentlessly, blurring the boundaries between public security and Orwellian oversight.

The case of Sergey Vyborov, stopped and surveilled simply for his participation in unsanctioned rallies, illustrates the personal toll of Moscow’s digital dystopia. It’s a stark manifestation of how state surveillance, once a tool for ensuring public safety, has devolved into a means of enforcing conformity and silencing opposition. Moscow’s reliance on such technology amidst a climate of political unrest and societal division serves as a dire warning of the dystopian potential of unchecked surveillance power, emphasizing the urgent need for global dialogue on the ethical implications of such technologies in civic life.

Buenos Aires and the Disturbing Reality of False Positives

Finally, turning our attention to Buenos Aires unveils a disturbing reality of facial recognition technology gone awry. The case of Guillermo Ibarrola, mistakenly arrested and detained for six days due to a facial recognition system’s error, serves as a chilling testament to the dangers of unchecked surveillance technology. Arrested at a bustling train station, identified by an unyielding machine as a criminal for an armed robbery he had no part in, Ibarrola’s story is not just a tale of technological failure but a stark illustration of a dystopian oversight.

Ibarrola’s ordeal in custody, where he endured harsh conditions without natural light and slept on cold concrete, magnifies the grim consequences of such errors. The stark absence of human oversight turned what should have been a straightforward verification process into a Kafkaesque nightmare, underscoring the profound human cost of depending on flawed AI systems for critical law enforcement tasks.

The relentless march towards an all-seeing surveillance state in Buenos Aires, with its aggressive expansion of CCTV and facial recognition technology, has ignited a fiery debate on privacy, the accuracy of surveillance technologies, and the necessity for robust accountability mechanisms. The city’s surveillance efforts, marred by at least 140 database errors leading to wrongful police checks or arrests since the system’s inception in 2019, reveal a harrowing pattern of collateral damage in the quest for security.
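Cases like Ibarrola’s are not freak accidents; they follow from simple base-rate arithmetic. Even a system marketed as highly accurate will generate a steady stream of false alarms when it scans millions of faces among whom genuine suspects are rare. The sketch below illustrates the idea with hypothetical numbers; they are not figures from Buenos Aires’ actual system.

```python
# Base-rate sketch: why "99.9% accurate" facial recognition still
# produces many wrongful flags at city scale.
# All numbers below are illustrative assumptions, not reported data.

def expected_false_positives(scans: int, false_positive_rate: float) -> float:
    """Expected number of innocent people wrongly flagged."""
    return scans * false_positive_rate

# Suppose a system with a 0.1% false-positive rate scans 1,000,000 faces.
flagged_innocents = expected_false_positives(1_000_000, 0.001)
print(flagged_innocents)  # → 1000.0 wrongful flags, each a potential Ibarrola
```

Without human verification between a machine match and an arrest, each of those flags can become a detention, which is precisely the safeguard that failed in Ibarrola’s case.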

As Buenos Aires confronts the fallout from these grave technological missteps, the controversy sheds light on the broader implications of deploying advanced surveillance technologies without comprehensive safeguards or ethical considerations. Guillermo Ibarrola’s harrowing experience is a potent reminder of the imperative to balance the pursuit of security with the preservation of fundamental human rights and dignity in the digital age.

Navigating the Crossroads of AI, Privacy, and Freedom

As we try to understand the convergence between technology, privacy, and personal freedom, it’s more important than ever to broaden our discourse beyond the digital confines of the internet. My intention with this article is not to cast a shadow over the potential of artificial intelligence — on the contrary, I’m an advocate for its capacity to serve humanity’s greatest needs. However, the problem arises when, for too many, their first encounter with AI isn’t through its potential to enact societal good but through unsettling experiences of surveillance, control, and, at times, unjust arrests. This portrayal of AI as an omnipresent, non-human observer, capable of cataloguing our every movement, is unsettling. It paints a picture of a technology that, rather than serving us, seeks to control us.

The instances of AI-powered surveillance discussed here are merely the tip of the iceberg, representing a global trend that challenges our democratic rights to freedom and privacy. This struggle for autonomy isn’t limited to the digital realm of internet browsing, social interactions, or professional endeavors. It has ominously expanded into the physical world, transforming our everyday environments into potential platforms for monitoring and control. Whether we’re casually walking through our cities or engaging online, the sense of being watched looms large.

This emerging reality underscores the necessity for our community here at tomi — those of us who envision a future anchored in internet privacy and data sovereignty — to consider the broader implications of our endeavor. Our pursuit transcends technological innovation; it is also a journey toward societal change.

We must remain vigilant, ensuring that our advancements in AI and surveillance technologies do not come at the cost of our fundamental human rights. As we look ahead, let us remember that the world we aspire to create is not solely defined by our technological capabilities but also by our commitment to safeguarding the sociological fabric that binds us. In championing data sovereignty and privacy among other things, we’re advocating for a future where technology empowers rather than oppresses, enhancing our freedoms rather than encroaching upon them.

Also consider signing the Ban the Scan campaign as a way to take action.

Follow us for the latest information:

Website | Twitter | Discord | Telegram Announcements | Telegram Chat | Medium | Reddit | TikTok | YouTube
