Privacy in the Digital World: Breaches Underscore Need for Federal Action

by Koustubh “K.J.” Bagchi

The word “privacy” has been top of mind lately in the digital space. You have probably noticed your email inbox filling up with privacy policy update notices prompted by the implementation of the European Union’s General Data Protection Regulation, or GDPR. Another reason is the scrutiny that technology companies are facing after revelations surrounding the Facebook/Cambridge Analytica scandal. There are significant questions to ponder: what privacy means for us in the digital space, what personal information should always be protected, and what key tradeoffs can help us better understand our privacy rights alongside the benefits we enjoy online every day.

As a starting point, it is important to note that the collection of large amounts of data, in and of itself, is not inherently dangerous. In fact, civil rights advocates have often been at the forefront of leveraging data and statistics to prove the existence of disparities based on race, gender and other protected statuses. The proper collection and analysis of data has the potential to make us healthier, safer and more compassionate. By providing a valuable tool to help advocates better understand the needs and motivations of underserved communities and communities of color, data could help advance the cause of justice in critical ways.

Yet the potential of data as a tool depends greatly upon user trust, both at the individual level and at the policymaker and stakeholder level. In the aftermath of scandals like Cambridge Analytica, businesses powered by big data must work hard to restore our confidence in how our data is treated, and policymakers should ensure that our privacy is protected in a way that keeps vulnerable communities safe.

The dialogue around what digital privacy principles should look like is still evolving. Stakeholders in this conversation — privacy advocates, consumer rights experts, and civil rights organizations, among others — have yet to reach a consensus on fair and constructive guidelines that all digital companies, particularly those in the online advertising ecosystem, should abide by.

Given this, we are launching a series of blog posts to provide up-to-date analysis of the evolving digital privacy space, an issue that deeply affects our community as it builds a greater presence online.

A major spotlight fell on how digital companies treat our personal information when Facebook first admitted in March of this year that it had allowed an outside firm, Cambridge Analytica, to harvest personal information from 50 million users as part of the firm’s effort to promote Donald Trump’s presidential campaign. By April, Facebook had raised that number, saying that as many as 87 million users may have had their data compromised.

After Facebook CEO Mark Zuckerberg appeared before two Congressional committees, many stakeholders raised privacy concepts worthy of more attention.

Historically, Facebook used the term “informed consent” to justify how it used the data it collected from its users. Yet the Cambridge Analytica scandal exposed how overbroad and vague this phrase was. Keep in mind that the firm never hacked personal accounts; it was permitted to gather data from the friends of users who used a third-party app on the platform. After scrutiny, Facebook made major changes to its privacy settings.

The concept of “opt-in” protections has been raised, meaning that all digital entities must gain consent to collect and share a user’s personal information for purposes other than the actual service that entity is providing to the consumer. Further, the idea of “transparency” has also been a focus of the discourse surrounding this issue. In this context, transparency calls for online entities to disclose how they are collecting and sharing users’ personal information by providing concise, easy-to-find, understandable privacy notices to users. Protections such as these will provide consumers with greater control over how their data is used and force all digital entities to be more transparent with their plans and intended use of customers’ personal data.

The damaging nature of misusing personal information is not an exaggeration. How personal information can be used to surveil ethnic minorities has already been illustrated in our nation’s recent history. Before so much of our personal information lived in a digital space, President George W. Bush’s administration initiated the National Security Entry-Exit Registration System (NSEERS) in the aftermath of the September 11, 2001 attacks. The countries of origin that were targeted were all Muslim-majority countries, except for North Korea. As a result, this program was widely seen as a “Muslim registry.”

As CNN describes the program:

“The program had three parts. First, it required non-citizens to register when they entered the US — a process that included fingerprinting, photo taking and interrogation. Second, it mandated that these people, as well as others already in the US, register and regularly check in with immigration officials. Third, it kept track of those leaving the country to make sure that temporary guests did not remain illegally. Violators were arrested, fined and even deported.”

The entry-and-exit program applied to both men and women, while the domestic registration program applied to all males 16 years of age or older from the designated countries. Thankfully, the domestic registration portion lasted only about a year, and the entry-and-exit portion was suspended in 2011. In one of President Barack Obama’s last acts in office, his administration dismantled the NSEERS program entirely.

For some, it may seem like hyperbole to liken today’s personal information collection practices to what was initiated to track and register visitors from Muslim-majority countries; however, recent government actions tell a different story — and it has minority and other vulnerable populations paying close attention.

For example, the Trump Administration’s “Executive Order Protecting the Nation from Foreign Terrorist Entry into the United States” (aka the Muslim ban) authorized Immigration and Customs Enforcement’s (ICE’s) Extreme Vetting Initiative (EVI), effectively the digital arm of the Muslim ban. The EVI gave the agency broad, unilateral authority to monitor the social media accounts, blogs, and other internet activity of Americans and foreign nationals, and to flag individuals for deportation or deny them entry based on the data collected. After much scrutiny and outcry, the agency pulled the plug on these efforts.

The U.S. State Department’s recent announcement that it intends to collect and review all social media accounts and identities of almost all persons entering the United States only further illustrates the extent to which social media is now being weaponized against immigrant and non-immigrant minority populations.

There is no doubt that Americans need a new federal law that formalizes consumer protections against certain commercial collection and use practices, and government surveillance. Congress should take action, and the outlines of this new law should be clear:

1. The law should cover all digital-based companies, including internet service providers, advertisers, e-commerce sites, entertainment and ad tech companies, social media platforms, search and browser providers, operating systems, data brokers, and everyone else within the online ecosystem.
2. Our privacy rights should be based on a uniform standard of transparency and full disclosure. At a minimum, consumers need more and better control of records containing the websites we visit, our online searches and purchases, and other online actions.
3. As part of this transparency, Congress should guarantee consumers consistent federal standards of access and control over all personal information collected from their online actions. A patchwork of inconsistent state and local regulations in this area will only confuse consumers.
4. Finally, the law must make clear what data collection rules and requirements apply to private-sector businesses and industries, and what apply to local, state, and federal government entities.

The points above are only the beginning of a longer conversation on digital privacy and security. The final outcome must strike a fair balance between strong digital privacy on the one hand and digital investment and innovation on the other. As such, any regulatory scheme should protect and foster an open Internet. Digital platforms have expanded opportunities for underrepresented groups, and those opportunities must not be diminished — especially by limiting direct access between creators, entrepreneurs, or citizens and their target audiences.

Despite the headlines raising concerns over the use of personal information, using such information to create individualized products and experiences is a practice we can expect to continue among companies that operate online. For example, some apps, such as maps, must have access to certain data to function. More importantly, in the realms of e-commerce and telehealth, the use of consumer-generated data helps organizations thrive and provide better service.

We must remember that our data is valuable and can be used to drive social good for business and society. At the same time, meaningful safeguards can minimize the likelihood that our data is misused for nefarious purposes. A user’s reasonable expectation of protection from the misapplication of personal information must never be compromised.

This piece is part of an Advancing Justice | AAJC blog series about digital privacy.

Koustubh “K.J.” Bagchi is Senior Staff Attorney for Telecommunications, Technology, and Media at Advancing Justice | AAJC.