‘No Self-Tracker is an Island’: Wrestling with Social Issues in Personal Informatics

Jasmine Jones
Published in ACM CSCW
7 min read · Nov 13, 2018

Imagine you have a friend or family member whom you would like to support in a behavior change, for example losing weight or recovering from substance abuse. If that change stems from a condition carrying stigma or vulnerability, they may be initially reluctant to ask for or accept support from others. They may, for example, be concerned that what was once a reciprocal relationship will be transformed into an asymmetrical “patient-caregiver” relationship. When a person is hesitant to reveal vulnerability by sharing their personal data, but sharing that data could help them receive support, how can technology still support them in this critical period?

At CSCW 2018, a group of 20 researchers gathered to discuss the opportunities for social uses of personal data, as well as the technical, social, and ethical challenges of personal data collection, sharing, and reuse.

Organizer bios and participant position papers can be found at http://tmilab.colorado.edu/socialpi/

What is Personal Informatics?

People use technology to help them track the steps they walk, the purchases they make, and the patterns of their menstrual cycles. This field is often called personal informatics (a.k.a. quantified self or self-tracking). However, these individualistic names are misleading: many of the practices of so-called “personal informatics” are actually highly social. A one-day workshop on Social Issues in Personal Informatics convened faculty, students, and industry researchers to discuss how, to paraphrase John Donne, “No self-tracker is an island.”

Highlights and Takeaways

Participants served on rotating panels, fielding questions about technology platforms, design strategies, and ethical issues for personal informatics research.

Throughout the daylong workshop, student and faculty researchers in fields ranging from journalism to data visualization presented a diverse array of applications for personal data and discussed important social issues to consider in this work. Participants engaged in panel Q&A, small-group topical discussions, and design activities. At the conclusion, the organizers led participants in a brainstorming session about how to sustain and advance this nascent research community beyond the conference.

Below, we describe a few highlights and takeaways from the day.

Visualizing the Ecosystem of Personal Data

Visualizations in personal informatics often display personal data (e.g., the steps you walked) in ways that omit the ecosystem surrounding it (e.g., the weather, your mood, and who joined you on your walk).

Drawing inspiration from historic and modern visualizations, such as the so-called “hobo symbols” popular in the early 20th century and social network graphs, workshop co-organizer Jaime Snyder led participants in sketching novel visualizations of the self, using data from internal sources (e.g., heart rate, motivation) and external sources (e.g., the people around you, institutional stakeholders).

Jaime Snyder, Assistant Professor at the University of Washington, demonstrates hobo symbols as a type of social data visualization. Participants sketched their own interpretations of social data ecosystems.

Participants represented this ecosystem for themselves, and for stakeholders in their research, using arrows, lines, circles, and amoeba-like shapes. In contrast to a simplistic bar chart of steps walked, creative visualizations like these can reveal the web of influences on and from personal activity. This sketching and diagramming method has been used to encourage reflections on data in participatory design sessions with researchers, self-trackers, and other stakeholders.
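For readers who prefer code to sketch paper, here is a minimal illustration (not an artifact from the workshop) of how such an ecosystem might be rendered as a simple graph, with one tracked activity connected to both internal and external influences. The node names and the internal/external split are assumptions made for this example:

```python
# A minimal sketch (not from the workshop) of a personal data "ecosystem"
# as a graph: internal sources (heart rate, motivation, mood) and external
# sources (walking partner, weather, employer wellness program) connected
# to a single tracked activity. Node names are illustrative only.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
activity = "steps walked"

internal = ["heart rate", "motivation", "mood"]
external = ["walking partner", "weather", "employer wellness program"]

for source in internal:
    G.add_edge(activity, source, kind="internal")
for source in external:
    G.add_edge(activity, source, kind="external")

# Color edges by whether the influence comes from inside or outside the self.
colors = ["tab:blue" if G.edges[e]["kind"] == "internal" else "tab:orange"
          for e in G.edges]

nx.draw_networkx(G, pos=nx.spring_layout(G, seed=42),
                 edge_color=colors, node_color="lightgray", font_size=8)
plt.axis("off")
plt.show()
```

Even this toy rendering makes the point of the exercise visible: the “personal” data point sits at the center of a web of other people, environments, and institutions.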

Ethical Considerations for Social (Re)Uses of Personal Data

Solidarity. Referring back to our opening scenario, participants discussed cases where self-tracking is used to manage a sensitive health condition. For some people, asking for support by sharing data involves revealing a vulnerability or a weakness to people close to them, or even to a broader audience. To avoid negative impacts and changes in personal relationships due to sharing self-tracking data, Kai Lukoff, a PhD student at the University of Washington, suggested tracking in solidarity as a possible approach. Saying “We should track” could be less disruptive to a relationship between peers than saying “You should track.” Such an act of solidarity says “We’re in this together,” and could make people feel comfortable opening up to those who are there to help.

Left: Co-organizer Laura Piña, a UX Researcher at Google, discusses ways to support families and caring friends of people sharing PI data. Right: Workshop participants brainstorm important ethical issues that arise when personal data is shared.

Secondary uses of data. In his position paper, Dong Whi Yoo, a PhD student at Georgia Tech, pointed out that data we generate about our activities (e.g., social media feeds) can be repurposed in ways we do not anticipate when we create it. His work focuses on how campus administrators might analyze students’ posts on social media, along with other sources, to track the mental health climate and proactively respond to crises. He asked: how should technologists obtain consent for such secondary uses? At a societal scale, the question is even more complex: when a person shares their genomic data (for instance, to research their genealogy), law enforcement could potentially requisition that data to identify relatives whose DNA matches genomic material found at a crime scene. How should consent for data use take into consideration potential effects on third parties?
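One concrete way to frame the consent question is purpose-scoped consent, where each permitted reuse is recorded explicitly and anything else defaults to “ask again.” The sketch below is purely illustrative; the purpose names and data structure are invented for this post, not drawn from Yoo’s system:

```python
# A hypothetical sketch of purpose-scoped consent: a piece of personal data
# may only be reused for purposes its owner explicitly opted into; anything
# else (including requests that affect third parties) requires a fresh
# consent decision. All purpose names here are invented.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    owner: str
    allowed_purposes: set[str] = field(default_factory=set)

def may_reuse(consent: ConsentRecord, purpose: str) -> bool:
    """True only if the owner opted into this specific secondary purpose."""
    return purpose in consent.allowed_purposes

consent = ConsentRecord(owner="student_123",
                        allowed_purposes={"personal_archive",
                                          "campus_mental_health_aggregate"})

print(may_reuse(consent, "campus_mental_health_aggregate"))  # True
print(may_reuse(consent, "law_enforcement_identification"))  # False: ask again
```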

Social Comparison. When people are trying to reach a goal, they might compete with others or chart their progress against similar users. Many apps automatically suggest people to co-track with to enable this social approach. Clayton Feustel, a PhD student at Georgia Tech, raised the question of how to choose comparison groups for users. His research shows that people have a natural tendency to compare themselves to those similar to them. But what if these similar others have some poor health habits (e.g., high levels of sedentary behavior)? Should a designer instead show comparisons against users who have ‘healthier’ habits?
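To make that design tension concrete, here is a hypothetical selection heuristic that favors peers who are similar to the user but modestly more active, rather than the most similar peers regardless of habits. The fields and thresholds are invented for illustration and are not taken from Feustel’s research:

```python
# A hypothetical sketch of choosing a comparison group: similar in age,
# not markedly more sedentary than the user, and slightly more active,
# so the comparison is an attainable "upward" one. Illustrative only.
from dataclasses import dataclass

@dataclass
class Peer:
    user_id: str
    age: int
    daily_steps: int        # recent average
    sedentary_hours: float  # recent average

def comparison_group(user: Peer, candidates: list[Peer],
                     k: int = 5, stretch: float = 1.1) -> list[Peer]:
    """Return up to k similar-age peers whose activity is at or modestly
    above the user's, excluding peers who are much more sedentary."""
    similar = [p for p in candidates
               if abs(p.age - user.age) <= 5
               and p.sedentary_hours <= user.sedentary_hours + 1.0
               and p.daily_steps >= user.daily_steps]
    # Prefer peers closest to a modest stretch goal, not the top performers,
    # so the comparison feels encouraging rather than discouraging.
    target = user.daily_steps * stretch
    similar.sort(key=lambda p: abs(p.daily_steps - target))
    return similar[:k]
```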

Strategies for Social Research

Participants discussed the technology platforms, systems, and devices that they were creating or using in their current personal informatics research. Many were re-appropriating existing commercial devices (such as Fitbits and smartwatches) to study and propose new systems. In a panel discussion, they laid out the benefits and drawbacks of re-appropriating existing tools versus building new infrastructure.

The workshop used a combination of reflective design activities and topical discussion to draw out key issues and challenges in the field.

Some of the themes that came up were:

Familiarity. Users tend to trust devices and platforms that are familiar, such as Fitbits, social networking sites, and messaging applications.

Discretion. Many of the workshop participants work with vulnerable populations in their research, such as people managing serious mental illness or recovering from substance abuse. We discussed how wearable sensors and associated personal informatics software must allow for discretion to account for any social stigma that might arise from managing these conditions through self-tracking.

Risk. Many self-tracking apps include sharing features and allow users to post their goals and progress on social networking sites. However, current platforms have insufficient controls for negotiating access to sensitive personal data, and carry risks of accidental oversharing or surveillance.

Relationships. Personal informatics applications can serve different kinds of groups, from single-household families to friend groups to geographically distributed online communities. The choice of devices, systems, platforms, and data policies might change based on the scale and relationship structure of the groups involved in self-tracking.

Combining data. We discussed the technical challenges of combining data from multiple sources into collective informatics applications, especially when systems combine data tracked intentionally by a group of people with data collected from passive sensors, such as air quality monitors. Issues such as consent, awareness, and unintended disclosure become more acute as the scale and analytics of such data grow (see the sketch after this list).
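As a minimal sketch of the consent issue raised in that last item, the example below (with invented data shapes) merges intentionally tracked group data with passive sensor readings only after filtering out records whose owners did not consent to group-level reuse:

```python
# A minimal sketch, under assumed data shapes, of combining group-tracked
# data with passive sensor data: each person's records carry an explicit
# consent flag, and anything not consented to group-level reuse is
# excluded before aggregation. Assumes non-empty inputs.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TrackedRecord:
    owner: str
    metric: str           # e.g., "steps"
    value: float
    group_consent: bool   # did the owner agree to group-level reuse?

@dataclass
class SensorReading:
    sensor_id: str
    metric: str           # e.g., "pm2_5" from an air quality monitor
    value: float

def group_summary(records: list[TrackedRecord],
                  readings: list[SensorReading]) -> dict:
    consented = [r for r in records if r.group_consent]
    return {
        "avg_steps": mean(r.value for r in consented if r.metric == "steps"),
        "avg_pm2_5": mean(s.value for s in readings if s.metric == "pm2_5"),
        "excluded_people": len({r.owner for r in records if not r.group_consent}),
    }
```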

Call to Action

From this workshop, attendees left with a stronger sense of the need and the urgency to further investigate the social aspects of personal informatics systems and self-tracking practices. Even platforms not traditionally thought of as “tracking,” such as digital media sharing sites, can be sources of important and revelatory information about oneself. We hope this workshop leads to continued focused study of novel social uses of personal informatics applications, as well as work that addresses the tensions that arise in developing and using self-tracking systems in social contexts.

Are you working in a social PI space? Learn more and join us!

Visit: https://tmilab.colorado.edu/socialpi

Join our listserv! Email Stephen Voida at socialpi@lists.colorado.com

This article was written by Jasmine Jones, Kai Lukoff, and Lucy van Kleunen.
