Understanding Computers and Society — Privacy and Freedom

Questions We Need to Answer Right Now — Part 2

Saif Uddin Mahmud
Dabbler in Destress
11 min read · Oct 23, 2019


Photo by Bernard Hermant on Unsplash

Privacy

Perhaps the most popular topic when talking about the increasing amount of tech we surround ourselves with, the debate on privacy has raged on for years. Privacy is not universally defined: it may mean the right to be let alone (Warren and Brandeis), it may mean secrecy, it may mean control over personal information. Furthermore, privacy is hierarchical, concerns different types of data (personal, health, relationship, political, etc.), and is interpreted differently across cultures. For this article, we will define privacy as the right to protection from intrusion into, and misuse of, your personal data.

[This article is part 2 of a series of articles exploring the various dimensions of computers that give rise to questions pertaining to society. You can find part 1 (Introduction and Ethics) here.]

Privacy, as a concept, has been evolving for thousands of years. Pre-modern people stopped having sex in front of their children, ancient cities started offering visual barriers around once-public latrines, and snooping through people's snail mail became rude only in Victorian England. Privacy in 2019, according to its proponents, lets people find their own way in life within a set of socially approved laws. It gives you the freedom to shape your own future and, hopefully, succeed, and it shields society from taking total responsibility when you fail.

There has been an exponential increase in computing power and in data collection and analysis capabilities over the past couple of decades. Everything around us is collecting data about us. Google, Facebook, and Amazon track dozens of attributes through browsers, smart devices, and wearables. These include — but are not limited to — location, device type, browsing patterns, cookies, and sessions, all used to “personalize” what you see, what you hear, and what you buy. The data-brokerage and analytics industry is so lucrative that thousands of businesses have sprung up in the field over the last decade. The corporate world isn’t the only one taking advantage of data, though…
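To see why those attributes matter, here is a minimal, purely illustrative sketch of browser fingerprinting: attributes that look innocuous on their own, hashed together, form an identifier stable enough to follow a user across sites even with cookies disabled. The attribute names and values below are assumptions for the demo, not any company's actual schema.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash a sorted set of attributes into a short, stable identifier."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Illustrative values only — a real tracker would read these from the browser.
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "America/Toronto",
    "language": "en-US",
    "fonts": "Arial,Helvetica,Noto Sans",
}

# Any site computing the same hash sees the same visitor ID,
# with no cookie ever being set.
print(fingerprint(visitor))
```

The point of the sketch: no single attribute identifies you, but the combination often does, which is why "just clear your cookies" offers little protection.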

The September 11 attacks brought the privacy-versus-security debate front and center again. The Bush administration promptly passed the Patriot Act, which allowed intelligence agencies to collect and analyze metadata, wiretap people, and search without warrants. Riding a surge of emotion, public opinion was behind the bill; people were willing to give up some privacy for even a small chance of avoiding future attacks.

In 2005, news broke that the NSA was spying on American citizens too, but little changed. In 2013, Edward Snowden — a former CIA employee and NSA contractor — leaked NSA documents showing the extent of the overreach and the near-total lack of congressional and judicial oversight. Tech companies were outraged that their technology was being used “unconstitutionally”, and consumer trust wavered. The international community was outraged that world leaders were being wiretapped by the US, heavily damaging diplomatic relations between allies. Ironically, it came to light soon after that European agencies were actively spying on Americans as well.

It’s important to recognize that you can’t have 100 percent security and also then have 100 percent privacy and zero inconvenience. — Barack Obama, 44th POTUS

Obama conceded that Snowden's rebellion enabled a much-needed international dialogue on privacy, but Snowden fled the country fearing a treason trial. The US Congress and the courts worked over the next few years to address the issues. The problems are far from resolved, and there hasn't been enough progress. While this surveillance fiasco rages on, tech companies haven't been sitting idle either.

Roger McNamee — businessman and early Facebook investor — clearly summarises the concerns people have about tech giants in this SGBS talk from 2019. Businesses have realized that in the age of the internet, data is money. With all this data you can optimize billions of dollars in ad revenue (economic edge), change public opinion to swing elections (political edge), influence people's actions (behavioral manipulation), and slowly control every aspect of people's lives. He claims that social media and search engines myopically promote practices that capture eyeballs and residual attention and boost polarizing content. All this without considering the ethics of large-scale A/B testing without consent (did you read those terms and conditions?!), the ethics of behavioral manipulation, the impact of having people glued to their screens, and the effect of “alternative news”.

Yet, perhaps because of how young the field of computing is, we do not have a coherent, complete set of universal laws protecting consumer privacy. There are no standards, and the industry is heavily underregulated. Recently, the issue has gained a lot more attention from governments around the globe due to its obvious national-security implications. We've seen regulations like the GDPR in the EU and the PDPA in Singapore — but these barely scratch the surface of the problem. Better late than never, I guess!

An extremely shallow argument put forward by “opponents of privacy” — if I may call them that, for lack of a better word — is “why care if you have nothing to hide?”. This attitude enables dangerous avenues for manipulation, ranging from the disgusting “digital kidnapping” of babies' photos by virtual pedophiles, to the echo chambers and fake news amplified by big tech, to the Orwellian surveillance systems springing up in China. Privacy is a sensitive, important, and complicated issue that concerns virtually everyone on this planet. Privacy is freedom, a fundamental human right. And that is exactly why we should all be involved in this conversation, no matter how indifferent we are about our own privacy.

It’s extremely difficult to opt out of the data ecosystem and stay “off the grid”. We have Alexas and Echo Dots everywhere; everyone carries smartphones and watches that can record audio at any time; most places are well covered by CCTV cameras; everyone browses the internet, shops online, and consumes news online. Even if you care enough to use Tor and DuckDuckGo and to run hardware that lets you disable the mic, camera, location tracking, etc., you have barely any control over who gathers your data from other sources, who that data is sold to, and what it is being used for. Another common “fix” put forward by companies is an option to download and delete all your data. But does Google/Facebook/Amazon/*insert-tech-giant-name* really delete your data when you ask it to?

Do we really have privacy? Can we stop mass surveillance when it is so easy to do? Can we stop the economic forces when they lobby against freedom? What can we, as individuals, do about it? Will the government and industry collaborate to solve the issue, or scheme against us common folks?

I’ll let you ponder these questions over three very recent cases caused by allegedly well-intentioned tech giants: Facebook’s Cambridge Analytica scandal, Google listening to your Google Home recordings, and Tesla claiming drivers are at fault while refusing to release the data.

Freedom

Ultimately, the core concept we’re worried about is freedom. “Freedom, generally, is having the ability to act or change without constraint,” according to Wikipedia. We all understand freedom and free will innately, so we won’t dive deeper into what freedom means. Instead, we will look at freedom — more specifically, technological freedom — along three dimensions: access, autonomy, and accountability. We will then use these lenses to sharpen the debate around free speech.

Access to technology is the ability to use it. Most of us do not think about access much, because most of us are reading this on a screen we likely own, so we can exercise our right to free speech and participate in the internet today. This is not the case for all seven-billion-plus residents of Earth. Think about people without access to electricity, electronics, or the internet, because it has not reached them or they cannot afford it. Think about people with visual disabilities, people who are deaf or nonverbal, people with medical conditions that prevent them from using common computing peripherals, and people on the autism spectrum. Think about people who are illiterate or do not understand the dominant language of the internet: English. There has been considerable work in these domains over the past few years: interfaces for the blind, UX work making devices easier to use for people in the Global South, and grand projects to provide electricity and internet to everyone on Earth as digital devices become cheaper and ubiquitous across all continents. We have come far, but we have a long way to go.

Autonomy is the ability to use technology for whatever end you desire. Just because a piece of technology is accessible to you doesn’t mean you can use it however you like — at least that’s the case for a lot of people in the world. In places like China, you can’t post anything against the government online without your “social score” being decreased. In South Asia, speaking out against sexual harassment is not the norm. Cyberbullying stops people from participating online all over the world. Social media bans are common in authoritarian regimes. These examples directly restrict free speech; social and political restrictions can hinder your freedom when using technology.

Accountability is the obligation to accept responsibility for your actions. This is where the free speech debate clashes the most. There is no universal way to balance freedom of speech against security and safety. A utilitarian would think about reducing harm when weighing free speech against regulation. Herein lie other problems as well. Should we restrict Nazi movements? Should we restrict anti-Israel movements? What about political speeches, since they too perpetuate hatred in today’s populist and polarizing rhetoric? Other forms of hate speech? Mass-shooter manifestos? Depending on whom you ask, the emotions these provoke range from anger to indifference to happiness. The challenge with free speech is not whether we should regulate at some point — most people think we should (consider whether you want ISIS/KKK propaganda reaching your kids). The problem is where we should draw the line.

In a more day-to-day scenario, we see social media companies struggling to moderate content in a way that keeps it appealing and fresh for their large user bases while nurturing freedom of expression. Common red lines are nudity, violence, and pedophilia. Other topics are more nuanced: breastfeeding, fake-news outlets, targeted ad campaigns, and so on. Light regulation has enabled good, such as rallying protesters and ousting dictators during the Arab Spring, or helping the Hong Kong protesters unify. It has also had horrendous effects, such as white-nationalist manifestos on 4chan, the livestreamed video of the Christchurch mosque shootings, and foreign meddling in the 2016 US elections.

In a harrowing talk titled “The price of a ‘clean’ internet”, Block and Riesewieck share a glimpse of the terrifying lives of ill-paid, offshore “content moderators” for tech giants like Facebook, Google, and Twitter. Bound to strict secrecy, these people run through thousands upon thousands of pictures and videos of rape, mutilation, murder, massacre, war, etc., and decide whether to “Delete or Ignore”. Aside from the mental health issues these workers, some as young as 20, develop over time, the filmmakers raise a question about the validity of censorship. If these moderators are censoring images and videos of war crimes happening in Syria, the duo argues, is that really fair? Does the world not have the right to see itself for what it actually is, and then take appropriate action? Wasn’t the whole point of the internet to democratize power? Instead, what we have today is a dystopian world where Big Brothers can control what we see.

Similarly, social media surveillance has had good effects, such as more effective public policy, prevention of suicides, and catching criminals — and even more surveillance could stop mass shooters and prevent crimes before they occur. But at what point do you stop? Does this not have the potential to turn dystopian, like Minority Report? Is it okay to keep your citizens under surveillance? Is it okay to keep the people of other countries under surveillance? How much power is too much power?

Such surveillance can be very dangerous in the wrong hands. Protesters in Hong Kong have taken extraordinary precautions just to avoid detection and future retribution. They make sure to cover even their ears during rallies; they carry umbrellas to render CCTV cameras useless; they wrap the cards in their wallets in aluminum foil to block RFID readers; they buy single-journey tickets on the MTR instead of using traceable stored-value cards; they carry lasers to flash at long-range police cameras — all to avoid identification. They do not enable fingerprint unlock or Face ID on their phones so that the authorities can’t forcibly access them if they’re arrested; they use VPNs and anonymous chat apps to disseminate information. Even after all this, they realize it most likely isn’t enough to stop the Chinese government.

Technologies enabling mass surveillance and real-time analysis of tremendous amounts of data did not exist until recently, so we never had to ask these difficult questions before. The libertarian tech community was furious when Snowden revealed US surveillance operations. Where is that outrage now, years down the road, as the industry keeps building better and better technology in this field?

Photo by Joseph Chan on Unsplash

Recently, the net neutrality debate also briefly stole the spotlight yet again. Check out this WIRED article to understand the problem better. Attempts by Facebook, Google, and SpaceX to provide global wireless internet access have drawn significant concern from pro-free-speech groups. Surely, giving a billion more people access to the internet is a good thing; but would that access come at the cost of autonomy?

Tech giants are known to employ ethically questionable UI/UX practices, backed by psychology research, to keep the user “hooked” — to “nudge” the user toward a click on an ad, a scroll for a bit more of the newsfeed, an impulse buy. Are these not instances of restricting our freedom? Are we living in an illusion of digital freedom? Are we moving toward a future where algorithmic overlords decide almost every aspect of our lives, optimizing them against a corporate or governmental objective function?

We have looked at the tough task of balancing freedom and privacy against security and development, from the point of view of tech consumers. What is your opinion on these issues? Let us know in the comments below!

Next up: What’s up with all these long-drawn court battles between tech giants? We have to understand Intellectual Property before we answer that!

I’ve tried to break this multi-part series into readable chunks, tackling one or two issues at a time, so that the main message is not diluted. If you have any feedback on the article, feel free to reach out to me. If you liked it and think more people should know about it, pass it on to a friend!

This piece was inspired by CSC300: Computers and Society, taken at the University of Toronto during Fall 2019, under Ishtiaque Ahmed. Please note that the opinions in this piece are my own.
