Surveillance Corporations

Baker Nanduru
6 min read · May 5, 2023


Photo by Jessica Lewis Creative: https://www.pexels.com/photo/boy-wearing-a-black-and-white-virtual-reality-goggles-3391378/

Over the past century, new consumer technologies have consistently raised privacy concerns. Since 2000, however, companies have collected ever-larger amounts of user data to influence and monetize their customers. With the proliferation of AI and the imminent arrival of the Metaverse and VR devices, an unprecedented amount of user data is now available to track users and influence their behavior in previously unimaginable ways. To preserve privacy in this new technological landscape, we will need to develop and adopt new privacy-preserving encryption technologies, implement timely and meaningful regulation of these new technologies, and raise awareness of privacy risks among a new generation of consumers.

New consumer tech brings new and advanced privacy concerns

For the last 150 years, every groundbreaking consumer technology has created new opportunities for corporations and governments to collect and harvest consumer data. Privacy concerns about corporate data collection have been consistent, but the impact on consumers has grown exponentially worse in the last two decades.

With the advent of the telephone, consumers worried about corporations and governments intercepting their conversations. With the camera, they worried about other people taking pictures without consent. With radio and television, they worried about organizations tracking their usage behavior. With the computer, they worried about others stealing their sensitive data. The privacy concerns of that era were limited to stolen medical records or credit reports, and to organizations monitoring employees' activities in the workplace. These concerns were valid, but the impact reached only a very small number of consumers.

With the advent of the internet, the magnitude of privacy risk amplified. Now thousands of websites collect user data without consent. Previously, public records were stored in physical archives and could only be accessed in person; now every organization holds personal information in digital form, prone to identity theft and other privacy violations. With the advent of mobile devices, we store most of our personal information on a single device, creating additional privacy risk when that device is stolen or lost.

With the advent of social media, consumers could connect with more people, and companies could collect vast amounts of information on user behavior. Third-party cookies became the surveillance tool corporations used to track users, their behaviors, and their intent. With this tracking, corporations could influence you through advertisements or political messaging. Mobile devices and wearables added further personal information: audio, video, location, physical activity, and biometrics. With smart assistants like Alexa and sensors like Ring, corporations can capture what we say and do in real time.

AI and Metaverse take the concerns to a whole new level

We are still in the early stages of AI development, but progress has accelerated rapidly in the last three years, especially with the popularity of ChatGPT. AI is being adopted by every enterprise and every government. We now see AI embedded in daily life: self-driving cars like Tesla, Ring video cameras, wearables like the Apple Watch, TikTok and Instagram feeds, Nest home thermostats, Alexa virtual assistants, Shopify e-commerce sites, and our beloved Android phones and iPhones.

Behind this AI, many models need large amounts of training data. If a model can recognize a car, it was trained on many images of vehicles behind the scenes. If a model recommends you a video, it was trained on what users like you have watched previously. This training data, especially when the target is human behavior, requires large amounts of personal and behavioral information. These large datasets, though locked inside organizational silos, are prone to profiling, misuse, breaches, and surveillance. Despite their best efforts and intentions, the most renowned organizations, including Meta, Google, and Microsoft, have had critical failures protecting user privacy.
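To make that data appetite concrete, here is a minimal sketch of the idea behind behavioral recommendation. This is an illustration, not any company's actual system: it scores unseen videos for a user by the watch histories of similar users, which shows why personal viewing data is the raw fuel.

```python
import numpy as np

# Hypothetical watch-history matrix: rows are users, columns are videos,
# 1 means the user watched that video. Real systems log far richer signals.
watched = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1 (tastes similar to user 0)
    [0, 0, 1, 1, 0],   # user 2
], dtype=float)

def recommend(user: int, k: int = 1) -> list[int]:
    """Score unseen videos by how often similar users watched them."""
    norms = np.linalg.norm(watched, axis=1)
    sims = watched @ watched[user] / (norms * norms[user])  # cosine similarity
    sims[user] = 0.0                                        # ignore self-similarity
    scores = sims @ watched                                 # similarity-weighted popularity
    scores[watched[user] > 0] = -np.inf                     # drop already-watched videos
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

print(recommend(0))  # → [2]: video 2, watched by the most similar user
```

Even this toy version only works because it holds every user's viewing history in one place; a production recommender holds the same kind of data for billions of people.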

Virtual reality devices are next-generation devices in the early stages of adoption. They have been around for a while, but Meta, Microsoft, and eventually Apple will turn them into massive socially connected platforms.

Recent research from UC Berkeley shows that virtual reality devices capture an unparalleled stream of user biomechanics data, with significant privacy ramifications. VR devices capture a user's body movement, eye motion, sound, vitals, and facial expressions in real time to create an immersive experience. If you have used an Oculus headset or played a VR game, you know what I mean. Meta's research shows that even sparse tracking, just the user's head and hand movements, can precisely predict the user's full-body movement. AI models fed this platform data can accurately profile users, their emotions, and possible health issues.

The UC Berkeley research showed that biomechanics may serve as a unique identifier in VR, on par with widely used biometrics such as facial or fingerprint recognition. We are entering a new era of "who we are": from passports and Social Security numbers, to fingerprints and facial recognition, to biomechanics. The researchers found that after training a classification model on 5 minutes of VR data per person, a user could be uniquely identified among a pool of more than 50,000 with 94.33% accuracy from 100 seconds of motion, and 73.20% accuracy from just 10 seconds of motion. This is quite scary.
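The mechanism behind that result is easy to reproduce in miniature. The sketch below uses synthetic data, not the Berkeley dataset or model: each simulated user has stable biomechanical traits buried in noisy motion frames, and a simple nearest-centroid classifier re-identifies users from a fresh session.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for VR telemetry: each user has characteristic
# biomechanics (height, arm length, head-turn speed, ...) that appear as a
# stable mean in their motion features, plus per-frame session noise.
n_users, n_features = 50, 8
true_profiles = rng.normal(size=(n_users, n_features))

def sample_session(user: int, n_frames: int = 100) -> np.ndarray:
    """Simulate one session of noisy motion-feature frames for a user."""
    return true_profiles[user] + 0.5 * rng.normal(size=(n_frames, n_features))

# "Enroll" every user from a single training session (nearest-centroid model).
centroids = np.stack([sample_session(u).mean(axis=0) for u in range(n_users)])

def identify(session: np.ndarray) -> int:
    """Return the enrolled user whose centroid is closest to this session."""
    dists = np.linalg.norm(centroids - session.mean(axis=0), axis=1)
    return int(np.argmin(dists))

hits = sum(identify(sample_session(u)) == u for u in range(n_users))
print(f"{hits}/{n_users} users re-identified from a new session")
```

Averaging across frames washes out the session noise and leaves the user's biometric signature, which is exactly why continuous motion streams behave like a fingerprint.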

Now the platform providers know more about users than the users know about themselves. From the data collected over time, providers can build behavioral and emotional models that predict what a user will do in a wide range of circumstances. That understanding lets them steer the user toward the outcomes the provider wants, e.g., purchasing a product, adopting a viewpoint, or other monetization tactics. The scary part is that users don't even realize how a platform provider is directly influencing their lives, all in the name of making more profit from their personal information.

What can we do?

With the advent of AI, VR devices, and metaverses, corporations will become the primary threat to privacy in the name of monetization. This is already partially true today, but corporations will become so powerful that consumer privacy will be mostly dead. Here are three areas that give us hope.

Firstly, the US and EU should enact comprehensive privacy legislation for AI and the metaverse within the next year.

Privacy legislation has been around for more than 50 years: from the Fair Credit Reporting Act of 1970, which regulated the use of credit reports, to the Privacy Act of 1974 for federal agencies, to the Electronic Communications Privacy Act of 1986, which regulated government access to electronic communications, to the Children's Online Privacy Protection Act of 1998, which required parental consent before collecting PII from kids. These were the major pieces of legislation of the 1970s through the 1990s.

In the 2000s, while consumer technology rapidly evolved and privacy concerns magnified, new legislation was much slower and less complete. In 2003, the FTC issued guidelines on the collection and use of PII by commercial websites. The Snowden revelations in 2013 brought more global awareness of privacy issues and led to the eventual enactment of the powerful GDPR in the European Union. Even these regulations were inadequate and couldn't protect consumers from corporations' mistakes and misbehavior.

Now, more thoughtful legislation is required to address the existential privacy risks posed by AI and the new immersive platforms.

Secondly, the new privacy technologies listed below need to mature rapidly and reach the mainstream. There are many promising technologies, techniques, and designs: zero-knowledge proofs, homomorphic encryption, self-sovereign identity, privacy-focused browsers and search engines like DuckDuckGo, virtual private networks and encrypted messaging for secure communications, and anonymization and pseudonymization techniques.
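Of these, pseudonymization is the simplest to illustrate. The sketch below is a minimal example, assuming a secret key stored separately from the data (e.g., in a secrets manager): direct identifiers are replaced with keyed hashes, so records can still be linked for analytics without exposing who they belong to.

```python
import hashlib
import hmac

# Assumption: this key lives outside the dataset (e.g., a secrets manager),
# so the pseudonymized records alone cannot be mapped back to real people.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "alice@example.com", "watch_minutes": 42}
safe_record = {"user": pseudonymize(record["email"]),
               "watch_minutes": record["watch_minutes"]}
print(safe_record)  # same email always maps to the same opaque token
```

Because the mapping is stable, analysts can still count "how many minutes did this user watch across sessions" while the raw email never leaves the ingestion boundary; rotating the key unlinks old and new data.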

In the last decade, most consumer-focused privacy technologies, like password managers and VPNs, haven't reached mass adoption unless a platform provider stepped in, such as Apple offering privacy by design or Google building password management into the Chrome browser. These new privacy technologies need to become mainstream and widely adopted, but technology alone cannot undo all the past consumer privacy sins.

Lastly, the younger generation of consumers should become more aware of their privacy risks and more vocal in shaping the future of privacy. This is a lot to ask of a generation that grew up with TikTok, Insta, and Snapchat, one that doesn't care much about privacy and is willing to trade it for great experiences and convenience. But all it takes is a spark to start a wildfire. The younger generation needs a privacy spark to take back control before AI and the new immersive platforms become mainstream.

The consumer privacy situation is sad and is about to get worse.


Baker Nanduru

Transforming lives through technology. Check out my product leadership blogs on Medium and my video series at youtube.com/@bakernanduru