Brett Kavanaugh: Let’s talk numbers.

There is exponentially more information created and collected today than ever before. How is this trend affecting us as individuals, as US citizens, and as a government? More importantly, what can we do about it?
Let’s discuss.

What’s going on?

As individuals rely more heavily on digital tools for communication, entertainment, learning, and monitoring, more data is produced about us every day. Would you believe that over 2.5 quintillion bytes of data are created globally every single day? By 2020, it’s estimated that each individual will create 1.7MB of data every second. We can thank the rise of the Internet of Things (IoT), our access to and dependence on technology, and advancements in capture formats, frequency, and quality for much of this growth. This technology can create new experiences in the name of efficiency, as with Amazon’s cashier-less stores, or in the name of security, as illustrated by China’s efforts to create a wide-scale surveillance system. While an entire article could be written solely about the incredible nature of this increase, I want to specifically explore some of its social implications.

https://www.good.is/infographics/the-world-of-data-we-re-creating-on-the-internet

In the case of Mr. Kavanaugh, I’m not surprised in the least that a 12-year member of the U.S. Court of Appeals for the D.C. Circuit and active author of published opinions has generated a lot of information to sift through. As we move forward, each successive nominee (or anyone, really) will need to share a larger volume of information to convey a full picture of who they are.

However, if we quantify the amount of publicly available information on the previous nominees against the total information collected about Kavanaugh, something very interesting becomes apparent: while senators reviewed about 182,000 pages of documents on Gorsuch and about 170,000 pages on Kagan, they’ve been digging into a record 1 million-plus pages of legal opinions and emails from Brett Kavanaugh’s career as a federal judge. Because so much more information has been collected on Kavanaugh than on his predecessors, not only will more time be required to collect and review it, but there is a higher probability that important data will be missed amidst the trivial minutiae inherent in raw data. One of Kavanaugh’s strokes of luck, given the new allegations of sexual assault against him, is that he didn’t grow up in a time when smartphones, live streaming, and cameras were everywhere, capable of recording everything. Today’s actions are much more likely to be recorded and remembered, providing solid evidence and a constant, retrievable memory of past behavior.

As more information becomes accessible, it’s reasonable to assume we can better understand an individual. However, with loads of new data, it also becomes easier to distort an individual’s identity with selected, cherry-picked information, which suggests that a larger percentage of the data, not simply a larger count of documents, is necessary to fully understand someone.

Consider the Indian parable of the blind men and the elephant. A group of blind men, with no knowledge of an elephant, learn about the animal by touching it. Each man feels a different part of its body, such as its tail, ear, or tusk. Because each has a limited experience and understanding of the elephant, all disagree about the elephant’s qualities. Had this animal been small enough to fit into the hand of each man, I imagine there would have been more agreement within the group. In short, our digital identity, the amalgamation of our personal data, is the ever-growing elephant.

https://en.wikipedia.org/wiki/Blind_men_and_an_elephant#/media/File:Blind_men_and_elephant.png

Why should we care?

Does this temptation to cherry-pick suggest that elected officials and leaders should be held to some higher standard of data collection and transparency? At the extreme, representatives could be tracked by the minute to capture who they’re meeting with, what was said, how much money changed hands, and what that money influenced. This data could incite new discoveries and legislation around how our representatives spend their time. Ultimately, could this clarity help create a more democratic system? We live in unprecedented times: more Americans are opposed to Kavanaugh’s confirmation (40%) than in favor (31%), despite the fact that nearly half (45%) believe the nomination will eventually be confirmed. This suggests that our current system does not represent the will of the people. Using China’s social credit model, could we reimagine a democracy in which elected officials are incentivized, through surveillance and social credit, to better represent their constituents over big donors? It’s worth noting that backroom deals, fundraising, and horse-trading make up a considerable amount of Washington’s backbone. Would our government even be able to function under this level of scrutiny?

By questioning the number of documents required to fully assess Mr. Kavanaugh, we must also address the role of personal data, surveillance, and growing digital footprints in our understanding of others. I would argue that the review committee must have access to a larger percentage of all available information to more fully assess a candidate’s appropriateness for a lifetime appointment; if a large majority of a candidate’s history is redacted, perhaps not enough information is available to make the ruling at all. Then again, how would this same standard affect our own job assessments? While you may not be in a role today to affect national policy through a lifetime appointment, the products and services you create today could still introduce a lifetime of change.

What can we do?

Context:

  • Why is user data being requested?
    Is there a clear need for it in order to complete a service? If the need is not clear, is it explained through the experience of using the product? For example, after Facebook Messenger found itself in the spotlight for potential privacy breaches, the company added more clarity around why certain permissions were being requested. From the product side, designers should question, and define a clear rationale for, every piece of user data they request. At Interaction16, I was delighted to hear that the UK Government had set new guidelines about requesting information about gender, explicitly stating: “You should only ask users about gender or sex if you genuinely can’t deliver your service without this information.” How much needless information is collected simply because it might be helpful one day, despite the lack of plans to define and realize that value?
https://www.makeuseof.com/tag/bad-facebook-messenger-permissions-anyway/
  • Will user data be disseminated? If so, how?
    This one is a little trickier; oftentimes, a description of how data is used and shared is laid out in a disclosure notice users probably clicked through without reading. Privacy disclosure statements are available but often buried at the bottom of websites or within app settings or profile menus. Within any privacy policy, Nate Cardozo from the Electronic Frontier Foundation and Joseph Jerome from the Center for Democracy and Technology suggest looking for specific terminology to understand the limits of your privacy and how data is actually used. I also search for words like “third party”, “privacy”, and “share” (a rough sketch of that kind of keyword scan follows this list). In addition to how an organization might share, sell, or otherwise distribute data to other companies, also consider how this information might be displayed publicly. For designers, there is a huge opportunity to improve communications around how information will be shared and disseminated. Cynically, I wonder if some businesses purposefully hide this information in legal jargon so as not to alarm their users about how widely (and fully) they are distributing user data. In this case, human-centered designers may be among the few people within a company with the ability and standing to advocate for the user during product development. You know the line: with great power comes great responsibility. In this case, I ask fellow designers: do we have a social responsibility to fight for more clarity and legibility within legal privacy policies?
  • How would a data breach affect users and the company?
    Depending on what data has been collected and stored, data breaches can range from bad to catastrophic. If user information has been anonymized, protected, and compartmentalized, a small breach could be relatively contained and have little impact on company operations and its users. However, with the number and size of recent breaches (Yahoo, Target, T-Mobile, and more), it’s safest to assume the worst when designing for a possible breach. In addition to technically designing a product to withstand hacking attempts, multi-disciplinary teams should also be designing plans to address inevitable security breaches.
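
For anyone who wants to try the keyword search mentioned above on a policy they’re reviewing, here is a minimal sketch in Python using only the standard library. The file name privacy_policy.txt is a placeholder, and the keyword list extends the terms I mentioned with a few assumptions of my own:

```python
# Minimal sketch: surface the sentences of a privacy policy that
# mention the terms worth reading closely. The keyword list is illustrative.
import re

KEYWORDS = ["third party", "third-party", "privacy", "share", "sell", "affiliate"]

def flag_sentences(policy_text):
    """Return every sentence that mentions one of the keywords."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    return [s.strip() for s in sentences
            if any(k in s.lower() for k in KEYWORDS)]

if __name__ == "__main__":
    # "privacy_policy.txt" is a stand-in for whatever policy you saved locally.
    with open("privacy_policy.txt", encoding="utf-8") as f:
        for sentence in flag_sentences(f.read()):
            print("-", sentence)
```

It won’t replace reading the policy, but it’s a fast way to jump to the passages that describe how your data gets shared.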

Format:

  • How secure is collected data?
    Is important information encrypted? How does the company ensure transmitted data reaches the right user (and ONLY the right user)? Is personal data accessible externally in any capacity? What measures have been put in place to protect against a malicious employee or devious hacker? Are protective measures strong enough relative to the value (social, financial, or otherwise) of the stored data? A company that users have entrusted with medical, deeply personal, or otherwise private information stands to experience a much greater fallout, which in turn demands stronger security for storing such data. As the FTC reported, Equifax’s breach affected 143 million Americans, putting deeply important identifying information in the hands of thieves. (A sketch of what encryption at rest can look like follows this list.)
  • How is information stored?
    Is the storage method or system incompatible with other systems? Does that incompatibility increase the security of collected information? Does it create roadblocks for future development and integration with other tools? Has an internal audit been conducted to confirm your company’s storage locations and practices? A while back, a friend told me about an incident in which a developer had decided to use an open cloud service to store a list of hundreds of important account numbers. Thankfully, someone discovered the practice internally and corrected it before a major incident occurred. Not only was that document potentially accessible to everyone within the company, but it may also have been accessible externally.
http://toothpastefordinner.com/081208
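
To make the encryption question above concrete, here is a minimal sketch using the third-party Python cryptography package. This is not any particular company’s setup, and key management (the genuinely hard part) is reduced to a single placeholder line:

```python
# Minimal sketch of encrypting a sensitive field before it reaches storage.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# In production the key would come from a secrets manager,
# never be generated ad hoc or stored next to the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"account_number=0000000000"  # placeholder, not real data
token = cipher.encrypt(record)         # this ciphertext is what gets stored

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(token) == record
```

The design question for a team isn’t whether code like this exists somewhere, but whether every sensitive field actually passes through it, and who controls the key.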

Recipient:

Who’s asking? In the case of healthcare, the difference between a health insurance company, a private business, and the government gaining access to user health records is huge, right? Consider 23andMe: the company is definitely selling de-identified, aggregate data for research to third-party companies, research institutions, and nonprofits if you give them consent. They are not (yet) using genetic information to promote products, like drugs or hair-loss treatments, that your DNA suggests you might need. The user’s net cost for consenting to share their data is relatively low. However, imagine if your health insurance company also got a copy of this data and noticed that you had a higher-than-average risk of a pricey medical condition, which then incentivized them to find a reason to drop you from the plan. Because the data is identifiable in this scenario, the impact on your insurance coverage poses a much greater risk to your personal health and wellbeing, even though the disseminated information itself hasn’t changed.

As discussed in one of my previous articles, the Genetic Information Nondiscrimination Act (GINA) protects Americans against discrimination by their employers or insurance companies based on genetic information in most situations. However, GINA does not extend to genetic discrimination by life, long-term care, or disability insurance providers. So, Americans are protected. Sort of. In short, understanding the intentions of the entity consuming your data makes a huge difference. Consider the following questions to get a better sense of who’s collecting user data:

  • What makes user data of interest to others?
    Consider your current or anticipated future identity, especially in terms of employment and notoriety. Think about your actions. If the IRS comes knocking on your door for additional information, it’s not likely they’re doing so blindly. In the case of Mr. Kavanaugh, his nomination to the Supreme Court initiated the sudden demand for more information. Are you being considered for a new role which triggered a search into your employment history, legal history, and more? Do you trust that laws protect you against the prying eyes of a potential employer? Think again. A publication from the IADC shared the following harrowing information:

Nontraditional employment data comes from sources other than the typical personnel data setting, such as “operations and financial data systems maintained by the employer, public records, social media activity logs, sensors, geographic systems, internet browsing history, consumer data-tracking systems, mobile devices, and communications metadata systems.” Employers may collect this information internally or may purchase it through data brokers. When combined with traditional employment data like performance reviews, employee longevity, attendance, absenteeism, and salaries, patterns emerge which can then be used to create predictive profiles. Employers can then use these profiles to predict outcomes for job candidates and employees with similar profiles and can deploy these insights in nearly every aspect of the human resources life cycle, including recruitment, hiring, promotion, compensation, and benefit management.

  • How does this organization plan to use user data?
    Is the requesting company asking for user data in order to provide a service the user is expecting? What about ancillary services a user is unaware of? Is the organization selling data as an alternative or additional revenue source? Who might they sell to, and are there any limits? Are there any other ways user data is being employed? As designers, are there ways to clarify these intentions and constraints (if any) to users?
  • What do users lose and/or gain from sharing their information with this entity?
    What are the potential implications for users when different organizations access their disseminated information? This isn’t an exercise I’ve seen widely utilized within the practice of product design, because it requires speculation beyond the limits of the company designers work for. Do legal representatives of any given company consider the implications and company liability of disseminated or sold information? Commerce is always about an exchange of value, whether it’s giving three goats for a bride or $699 for the new iPhone. Users are wising up to the fact that “free” products are very rarely free; instead, users are giving up personal data in order to use the service they want. However, if the value of that data isn’t known to one of the parties involved in the trade, is it a fair exchange? Maybe there’s an argument to be made that an exchange in which one party is unaware of the value of their trade simply makes that merchant an unprepared, and therefore poor, businessperson. That said, how can users be responsible for understanding the value of their information when they aren’t involved in the market of buying and selling personal data?
  • What protections does an organization bake in for its users?
    Does the company have a history of defending or caving to demands for user data? Take Google: the company has received over 48,000 requests in the last year, pertaining to over 87,000 user accounts. Apple, by contrast, likely in an attempt to disclose less user information to requesting parties, collects less user data than other large tech companies and also scrambles what is collected so it’s not identifiable to any one user. Apple products also store more data on the device itself, as opposed to Apple’s servers; this local data is encrypted and only accessible via passcode or biometric data. Interestingly, unlike Google and Facebook, Apple doesn’t offer a quick link to download stored data, making it difficult to see exactly what personal data Apple has held onto. Still, these efforts are promising compared with what the telecom industry is willing to give up about you.
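
To make “scrambling” a little more concrete: Apple’s actual techniques (such as differential privacy) are far more sophisticated, but the simplest version of the idea is pseudonymization, replacing a raw identifier with a keyed hash before anything is stored. This is an illustrative sketch, not Apple’s method; the key and email below are made up, and note that pseudonymized data is still linkable by whoever holds the key, so it is not true anonymization:

```python
# Illustrative sketch of pseudonymization via a keyed hash (Python stdlib).
# The general idea: store events against an opaque token instead of a raw
# user identifier.
import hashlib

SECRET_KEY = b"example-key-store-me-in-a-secrets-manager"  # made up

def pseudonymize(user_id: str) -> str:
    """Keyed BLAKE2 hash: stable per user, meaningless without the key."""
    return hashlib.blake2b(user_id.encode("utf-8"),
                           key=SECRET_KEY,
                           digest_size=16).hexdigest()

# Same input always yields the same opaque token, so analytics still work,
# but stored logs no longer contain the raw identifier.
print(pseudonymize("user@example.com"))
```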

In summary…

As human-centered designers, we need to strongly advocate for transparency, comprehensive disclosures, and limited data collection on behalf of our users. We need to ask tough questions within our organizations about the necessity of every collected piece of data, its format, storage, and the intended audience for that data. We need to design services that give autonomy back to users, so they can make clear judgments about whether the service they’re requesting is worth what they’re sharing in return.

As citizens, we must consider whether constant surveillance has any place in society: can surveillance of public servants promote a more representative and honest democracy? More immediately, we need to decide whether we have enough publicly available information to place someone in a lifetime position who could impact our ability to freely consider and pursue these decisions later.
