A Review of the CSCW 2018 “Privacy in Context” Workshop

Karla Badillo
Published in ACM CSCW · Jan 22, 2019 · 8 min read

As the networked privacy research community grows, there’s no doubt that the discourse around privacy is important and prominent both inside and outside of academia. However, the term “privacy” is complex and, therefore, often misunderstood and misused in empirical HCI research. One way to solve the problem of the often fragmented and erratic use of the term “privacy” in HCI is for our community to converge on a subset of core privacy theories and frameworks that can meaningfully inform our scholarly work and provide a common foundation on which to move our field forward. This was the primary goal of our one-day CSCW 2018 “Privacy in Context” workshop. This blog post provides a summary of the event.

What is CSCW?

The ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) is the premier venue for research on the design and use of technologies that affect groups, organizations, communities, and networks. This event gathers researchers and practitioners to discuss the latest challenges of technology design research that supports collaborative work.

What is Networked Privacy?

The term “networked privacy” is used by privacy researchers to acknowledge that our online privacy (our data, information, and interactions with others) is shared with various online platforms, devices, and people. The co-ownership and management of online privacy is therefore networked, meaning it is not solely up to an individual to withhold or disclose personal information online.

Workshop Highlights & Takeaways

Taking a closer look at the common theories and frameworks applied within the HCI community, we can see that privacy is approached from different perspectives. Some frame it as a form of interpersonal boundary regulation, where individuals or groups negotiate appropriate boundaries with others; others take more norm-based approaches that frame privacy as appropriate information sharing (as discussed by our keynote speaker, Helen Nissenbaum). We therefore gathered about 20 researchers from a variety of disciplines to theoretically examine the concept of privacy, with the following end goals:

  • Understand privacy as a complex and multi-faceted concept
  • Engage theory in privacy research and design
  • Develop guidelines and heuristics for applying theory
  • Foster new collaborations

In the following sections, we provide an overview of the main activities within the workshop.

Keynote — Helen Nissenbaum, Philosopher by Training

The workshop began with a 30-minute keynote by Professor Helen Nissenbaum, best known for her “Contextual Integrity” privacy framework, followed by a Q&A. The highlights of the keynote are below.

Helen Nissenbaum presenting the parameters and values of her Contextual Integrity Framework

The keynote started with a high-level distinction: in the societally concerning sense of privacy, one can observe information practices and try to characterize them (descriptive), or one can evaluate them against norms (prescriptive). Contextual Integrity (CI) falls into the latter, prescriptive category. Helen then described what CI is and is not, making the major points below.

  1. The most important point of CI is to consider privacy as appropriate information flow. It does not treat privacy as secrecy, no data collection, no information flow, data minimization, or data leakage; rather, the question is whether an information flow is appropriate in its context. For example, in the privacy-by-design community, people talk of privacy as data minimization, but if you look closely at their work, the flows are being reshaped and redistributed in a different way. They are using different terminology to do the same thing (re-defining appropriate information flow). Appropriate flow means flow that conforms with entrenched contextual informational norms, where norms are prescribed behaviors (e.g., sharing information with a doctor in a hospital). This is NOT to say that privacy is respected simply because you get people’s consent to share information; fundamentally, if an information flow is not appropriate, privacy is breached. Appropriate flow also means flow that conforms with legitimate contextual informational norms, because entrenched norms can themselves be bad (slavery, for example).
  2. Related to the previous point was the explanation of contextual informational norms. Contextual informational (privacy) norms are specified by five independent parameters: <actors: subject, sender, recipient>, <information type>, <transmission principle>. The norms do not refer to subject control, notice and consent, public/private/sensitive classifications, or general (as opposed to contextual) norms. In this definition, actors carry roles within contexts such as finance, healthcare, and education. Information types include things such as demographic and biographical data, transactions, what you’ve read, an SSN, a medical diagnosis, a facial image, the spoons of sugar in your coffee, or how much you paid for your house. Transmission principles include things such as: with consent, coerced, compelled, stolen, bought, sold, in confidence, surreptitiously, with notice, reciprocally, and as required by law. For example, if you try on a shirt at a store and like it, the principle is that you must pay before you walk out. Similarly, a police officer searching someone’s house needs a warrant; otherwise it is wrong.
  3. A question was raised here: what about situations where there is no accepted norm yet? Can CI help there? Can it be a proactive tool for helping to define or identify norms? Research is needed to specify norms in such situations. One can use the CI norm structure to map information flows: 1) as a diagnostic tool to capture information practices; 2) to communicate information practices (meaningfully); 3) to learn about people’s privacy expectations; 4) to detect discrepancies (comparing actual flows with expected flows); and 5) to define meaningful transparency.
  4. Helen is adding a sixth element to CI: establishing the legitimacy of contextual norms. Sources of legitimacy could include: 1) the interests and preferences of affected parties (individual); 2) ethical and political principles and values (societal); and 3) contextual functions, purposes, and societal and domain values. These values mean different things in different domains. In healthcare, the values can be curing disease, alleviating pain and suffering, and equity; in politics, democracy, autonomy, accountability, and justice; in home and society, trust, autonomy, and stability; in education, knowledge and justice. The legitimacy of contextual norms can take the form of ethics, policy, or legal regulation. A related question is how individual differences affect perceived norms. From the community perspective, researchers doing this work put important insights into the hands of legal teams, helping judges recognize when a practice violates expectations. Future research is needed here.

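To make the parameter structure concrete, here is a minimal sketch in Python of representing an information flow by the five CI parameters and checking it against entrenched contextual norms. The names and the exact-match check are our illustrative assumptions, not part of Nissenbaum’s formal framework; real CI analyses reason about norms far more richly than set membership.

```python
from dataclasses import dataclass

# Hypothetical sketch: an information flow described by the five CI parameters.
@dataclass(frozen=True)
class Flow:
    subject: str    # whom the information is about
    sender: str     # who transmits it
    recipient: str  # who receives it
    info_type: str  # e.g., "medical diagnosis"
    principle: str  # transmission principle, e.g., "in confidence"

# Entrenched norms for one context (healthcare), expressed as allowed flows.
hospital_norms = {
    Flow("patient", "patient", "physician", "medical diagnosis", "in confidence"),
}

def conforms(flow: Flow, norms: set) -> bool:
    """A flow is appropriate iff it matches an entrenched contextual norm."""
    return flow in norms

ok = Flow("patient", "patient", "physician", "medical diagnosis", "in confidence")
leak = Flow("patient", "physician", "advertiser", "medical diagnosis", "sold")

print(conforms(ok, hospital_norms))    # True: matches the entrenched norm
print(conforms(leak, hospital_norms))  # False: same data, inappropriate flow
```

Note how the second flow involves the same subject and information type as the first; only the recipient and transmission principle change, which is exactly why CI resists reducing privacy to information type alone.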
Finally, Helen presented a few studies she has been working on. The first analyzed privacy policies to see whether they mention all five aforementioned parameters (usually they do not). The second mapped information flows to identify leaks and violations. Another project examined a Pew Research survey that tried to reduce privacy to information type (questions about sensitive information: religion, location, etc.); however, as soon as another parameter was added, people’s attitudes changed drastically. The final project, by Helen Nissenbaum and Heather Patterson on biosensing in context and health privacy, ends with a prescriptive recommendation on whether one should post health information online.

The keynote ended with a question and answer segment, followed by a group discussion. Below are the highlights.

Q: People may have very different norms, so what do you do?

A: Try to see if there is a norm first, then try to embody it in your design as the default. There may not be a consensus, in which case you may support multiple options and give people a choice. In a situation where there are no norms, map out where the flows are going, and then decide where they need to go.

Q: Is designing a norm into the system a good idea?

A: The absence of a norm could indicate either that it is a matter of choice with no right answer, or severe ignorance. Protecting privacy promotes societal ends; for example, we give people privacy in democratic elections.

Q: Is CI not predictive?

A: Discovering the norm is what CI does; it might be an implicit norm. With established actors, like a physician, we can draw on previous norms. Now there are new actors, like Google or telecommunications platforms. If we don’t jump in now, we are going to lose out.

Discussion point: moving not just toward reasonable expectations or norms, but toward a standard norm. Law scholars are split. In Kyllo v. United States, where police used thermal imaging to read heat patterns on the walls of a house, Scalia wrote the majority opinion: because thermal imaging had not entered the mainstream, the police needed a warrant to use it (another case discussed concerned searches of garbage). Once thermal imaging becomes normalized, is it okay? Some law scholars say yes, it becomes normal, just as overflights mean anyone can see your patio; others say no. Having evidence that a practice would really surprise a lot of people matters to some judges, while other judges take a more prescriptive approach.

How Can We Engage Our Research in HCI Privacy Theories?

Later in the day, participants formed small groups based on their area of research and privacy theory/framework of choice. Each group developed a potential research study and collaborated on identifying how the theory/framework could be incorporated to address their research problem. Three groups emerged:

Workshop participants brainstorming and presenting their potential theory-based research projects

  1. Contextual Integrity + Emerging Technology + Norms: This group was interested in addressing the pervasiveness of passive information collection and use. They discussed theories such as privacy calculus (weighing benefits against risks) and designed their project as a scenario-based participatory design study. Their end goals were a critical theory paper, a user study publication, and a prototype evaluation.
  2. Education + Privacy + Literacy: This group was interested in studying privacy literacy on YouTube for children. They framed their project using the contextual integrity framework and planned to address the problem with an interview study.
  3. Context + Cultural/Vulnerable Populations: This group was interested in studying the individual; specifically, how theories address individuals and how theories are applied with vulnerable populations. This group discussed many different theories, such as stigma theory, control theory, and intersectionality, as well as the intersection of critical theory and privacy theory.

Future Directions

The workshop concluded with the hope that privacy researchers and professionals will continue to gain enthusiasm and excitement towards bridging theory and practice in privacy and design research. The attendees noted areas in which we can further the field of networked privacy such as:

  • Maintaining a centralized website with information on future privacy workshops, privacy researchers’ bios/contact information, information on the latest privacy research, and more.
  • Making privacy principles practical, and applying theories in ways that make them predictive.
  • Finding ways to integrate theories into real, marketable products.
  • Doing translation work between computer science, HCI, practitioners, and government to make theory practical.

Are you interested in Networked Privacy Research? Join our network!

This article was written by Karla Badillo-Urquiola and Yaxing Yao on behalf of all the workshop organizers.

Karla is a Modeling & Simulation PhD student, specializing in Human-Computer Interaction, at the University of Central Florida.