Assumptions in the Design of Self-Tracking Tools

Sean Munson
HCI & Design at UW
10 min read · May 22, 2017

Today’s interactions with technology leave digital traces that are a potential font of insights into our lives. Our emails contain receipts of our social and business transactions. Our phones can record our locations, estimate our physical activity, and infer how we traveled between different places. Wearables can collect more reliable and more detailed data about physical activity, and researchers — including my colleagues at UW — keep finding ingenious ways to repurpose the sensors in everyday devices to collect an ever-broader range of data.

In 2010, Ian Li, Anind Dey, and Jodi Forlizzi published a paper describing the five-stage model of personal informatics: preparing to collect personal data, collecting it, integrating that data for analysis, reflecting on it, and then acting on the insights gleaned. This model has guided many researchers and designers creating tools to help people collect, inspect, and act on personal data such as the digital traces and sensor data described above.

Li, Dey, and Forlizzi’s five-stage model of personal informatics has helped guide the design and study of self-tracking tools.
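As a rough way to make the pipeline concrete (the identifier names and code structure here are my own paraphrase, not anything from the paper), the model can be sketched as an ordered sequence of stages:

```python
from enum import Enum, auto
from typing import Optional

class Stage(Enum):
    """Li, Dey, and Forlizzi's five stages, sketched as an ordered pipeline."""
    PREPARATION = auto()  # deciding what to track and which tools to use
    COLLECTION = auto()   # gathering data via sensors, journals, etc.
    INTEGRATION = auto()  # combining and transforming data for analysis
    REFLECTION = auto()   # inspecting and interpreting the data
    ACTION = auto()       # acting on the insights gleaned

def next_stage(stage: Stage) -> Optional[Stage]:
    """Return the stage that follows, or None after ACTION."""
    order = list(Stage)
    i = order.index(stage)
    return order[i + 1] if i + 1 < len(order) else None
```

The linear, terminating structure is the point: the assumptions below are largely about whether people actually march through such a sequence once, in order, and stop at the end.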

There are, however, many assumptions embedded in this model and many of the projects inspired by it.

As personal informatics capabilities become embedded in more products and personal data touches more facets of life, these assumptions merit another look. How broadly do they hold up? Which use cases — and who — do they support, and which do they exclude?

Assumption 1. Action is the Goal.

The personal informatics model ends in action, and indeed, the most prevalent application of personal informatics has been to support behavior change: people who want to eat better, people who want to exercise more, people who want to be more productive, people who want to find a way to save a bit more for retirement.

However, when researchers survey or interview people who use self-tracking tools, we find that action is not the only goal. Yes, some people want actionable insights. Others are just curious — they want to know where they stand relative to others, they want to get a sense of their current routine, or they just want to know which artists they listen to the most. Other people track because they want to have a record of their behavior, or because they love having the data.

Still others track because it enables other incentives. People didn’t typically use Foursquare because they wanted to change their restaurant behavior; they used it to participate socially with friends, to earn restaurant discounts, or because they enjoyed the game of earning badges. Other people use Fitbits or other physical activity tracking tools to receive incentives as part of their workplace wellness programs, such as Humana’s Go365 program or Limeade’s programs. Unfortunately, these incentives can create conflicts in goals.

For example, imagine you are a swimmer, but your workplace wellness program only lets you automatically log steps. You might resist and keep swimming, or you might be motivated by the financial incentives (gift cards, a discount on your deductible) and replace swimming with walking — even though you enjoy it less and it may not support your health goals as well as swimming. The wellness program might even see this as a success, since they don’t know you gave up swimming to achieve all those steps you logged.

We have also seen conflicts in goals in our research on menstrual tracking apps. Many application designers assume that people track either to prevent pregnancy or to become pregnant. Even if that covered every goal for someone using a menstrual tracking application — and it doesn’t — it’s a mistake to assume that people will always have this goal or that they will want to stop tracking if their goal changes. Because applications often support only one goal, and because the data in personal informatics applications are often stored in application- or platform-specific silos, people frequently have to abandon apps and all of their data when their goals change.

We’re starting to see some good progress here. Google Fit and Apple Health allow people to share — and reuse — personal data between applications, though many companies are loath to add data their products collect to these personal data libraries even as they pull data from them. Withings also recently updated their weight apps to offer a “pregnancy mode,” allowing users to continue using the same tools and data while reflecting that someone’s goal or status can change.

Assumption 2. People will use Personal Informatics Tools Indefinitely.

In research and commercial products, I see two common patterns for use of personal informatics tools. In the first — closest to the pattern described by Li et al. — people use the tools long enough to gain actionable insights, make some changes, and maybe repeat if they aren’t satisfied with the outcomes. In the second — e.g., Fitbits — people make ongoing use of the tools to monitor their behavior and outcomes, continually fine-tuning their behavior.

In practice, people use tracking tools sporadically. Some tracking approaches, like food journaling, are too burdensome for most people to sustain day in and day out. Others track for a bit, and then the device breaks and they don’t replace it, or they just forget to charge it. Still others find regularly attending to their data too burdensome. As a result, many people alternate between tracking and lapsing, often with longer periods of abandonment.

To better describe people’s actual tracking practices — and help designers better envision how to support them — we created a model of lived informatics.

The Lived Informatics model describes the often-concurrent processes of tracking and acting on data alongside collecting, integrating, and reflecting on that data; captures the common steps of lapsing and resuming tracking; and further breaks out the steps of deciding to track and selecting tools.
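In contrast with a single march through five stages, the lived informatics pattern is easier to picture as a looping set of states. This is a sketch of my own, with hypothetical state names and transitions rather than the model’s formal vocabulary:

```python
from enum import Enum, auto

class TrackingState(Enum):
    DECIDING = auto()   # deciding whether and what to track
    SELECTING = auto()  # choosing a tool
    TRACKING = auto()   # concurrent collection, integration, reflection, action
    LAPSED = auto()     # tracking has stopped, perhaps temporarily
    RESUMING = auto()   # returning after a lapse, possibly to old data

# Unlike a strict pipeline, transitions loop: people lapse, resume,
# and switch tools repeatedly rather than finishing once.
TRANSITIONS = {
    TrackingState.DECIDING: {TrackingState.SELECTING},
    TrackingState.SELECTING: {TrackingState.TRACKING},
    TrackingState.TRACKING: {TrackingState.LAPSED, TrackingState.SELECTING},
    TrackingState.LAPSED: {TrackingState.RESUMING, TrackingState.DECIDING},
    TrackingState.RESUMING: {TrackingState.TRACKING, TrackingState.SELECTING},
}

def can_transition(a: TrackingState, b: TrackingState) -> bool:
    """Whether state b is reachable directly from state a in this sketch."""
    return b in TRANSITIONS[a]
```

The cycles are what matter for design: a tool built only for the TRACKING state has nothing to offer someone who is lapsed or resuming.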

The Lived Informatics model points to a range of needs, including for:

  1. Tools to better support people who are resuming from lapsed tracking,
  2. Designs that help people get some benefits in a limited amount of time, or from a lower-burden form of tracking, even if those benefits are less than they might get from ongoing, more intensive tracking,
  3. Designs that better support people who have lapsed, and
  4. More research and design to help people find the right tracking tool for them.

Assumption 3. Self-monitoring and self-regulation, maybe with a little social pressure, are enough to support behavior change.

Most personal informatics tools follow a pattern that is something like:

  1. Collect data
  2. Put it on a graph, or maybe a map,
  3. ???
  4. Action

This works fine for people who just need a little more visibility into their behavior throughout the day, such as someone who simply needs to see that they haven’t walked as much as they think they have on a busy day.

That isn’t enough, though, for people who don’t know where in their busy day, amid all their other commitments, they can walk more.

Other people have questions that go beyond what in-the-moment feedback can answer. Better awareness of one’s symptom level isn’t enough for someone struggling to understand complex relationships between what they eat, stress, physical activity, and the symptoms they face. For these people, merely showing the data — an approach Matt Kay calls “put ’em on a graph and hope” — often causes more frustration than it helps. And so this becomes another reason people abandon their personal informatics tools.

Personal informatics needs better tools to help people make sense of their data. This includes tools that help people answer their personal health questions by scaffolding a scientifically valid process of forming hypotheses and conducting self-experiments to test them. Other groups are exploring additional promising approaches, such as

  • just-in-time, context-specific suggestions, such as MyBehavior, a project that suggests healthy behaviors to engage in or avoid
  • individualized, predictive models, such as Glucoracle, an app that allows people with diabetes to predict the effects of eating a particular food on their blood glucose.
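To make the self-experimentation idea concrete, here is a deliberately tiny sketch of the kind of n-of-1 comparison such tools scaffold. The numbers are invented for illustration, and a real design would also randomize condition order and assess whether the observed difference is meaningful:

```python
from statistics import mean

# Invented example data: symptom severity (0-10) recorded on days when a
# suspected trigger food was eaten vs. comparable days when it was not.
trigger_days = [6, 5, 7, 6, 5]
control_days = [3, 4, 2, 3, 4]

# The self-experiment's estimate of the trigger's effect on symptoms.
effect = mean(trigger_days) - mean(control_days)
print(f"Average difference in symptom severity: {effect:+.1f}")
```

The value of scaffolding is that the tool, not the person, takes on the experimental-design work: choosing comparable days, balancing conditions, and framing what the difference does and does not show.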

We’re also starting to see some of these ideas in commercial products, such as Jawbone’s UP Coffee app, which lets people explore the effects of caffeine on themselves.

I’m also not ready to write off graphs. While recent work shows mixed literacy for the types of graphs shown in personal informatics applications, my group has found that people can use graphs to analyze their data, if those visualizations are adequately scaffolded and grounded in the individual’s data. We’ve also combined graphs with explanatory captions to communicate a finding while allowing people to explore the data behind it.

Assumption 4. More data are better.

When designing new tools, I often feel that more sensor data and lower-burden journaling tools should provide users with more value. While that’s often true, adding more data and more resolution can also be counterproductive.

Consider food. Many food journaling applications are oriented around calorie tracking. While this may work for some people, the orientation around calories is burdensome and can even nudge people away from healthy eating. Beyond these challenges, a calorie focus also presents food journalers with an experience quite removed from the first things most people think of when sitting down for a meal. When you imagine a meal, you probably think of the tastes, the smells, the texture, and maybe the company — not spreadsheets.

To consider an alternative approach, colleagues and I built a food journaling application that presents its users with just one daily challenge, which they can record completing by simply taking a photo. We have experimented with both nutritionally prescriptive challenges (e.g., “eat something high in fiber”) and non-nutritionally prescriptive challenges (e.g., “eat something that reminds you of your teenage years”).

Rather than requiring users to complete a journal of everything they ate, Food4Thought prompted users to complete daily challenges. We evaluated variations with (a) nutritionally prescriptive and (b) non-nutritionally prescriptive challenges, as well as private and social variations. Food4Thought users could look back at (e) completed challenges and (f) all pictures taken.

In a field experiment, both forms of challenges increased people’s reported food mindfulness. There’s still a lot of work to do to learn if this actually leads to healthier eating in the short- or long-term, but it does suggest a way forward around the “minimum viable data” that can further a goal. Our research on self-experimentation to test the relationships between foods consumed and symptoms similarly reduces the experience to just modifying and tracking one meal a day and resulting symptoms, rather than tracking symptoms and foods all day long.

Assumption 5. Self-tracking is self-tracking.

In everyday life the decisions, behaviors, and outcomes related to the data people track are interconnected and often collaborative. Family members influence what they each eat. A bad night’s sleep can affect the whole family the next day. Managing chronic conditions requires marshaling the resources of family, friends, and other caregivers. Patients struggling to make sense of data turn to their health providers; others turn to retirement planners, dietitians, and personal trainers to answer other questions.

Self-tracking is rarely truly individual, yet when discussing health trackers and other tools, the HCI and health communities often use the terms personal informatics, self-tracking, or self-management. Our use of these terms, and of models focusing on an individual, leads to products that are similarly focused on the individual.

Yes, many fitness tools allow data sharing, but typically in shallow ways that best support competition or other comparisons. Just as tools fall short of providing many people with actionable insights, they also fail to help families better understand how their decisions affect each other or opportunities to adjust their collective behavior.

These challenges go beyond health applications. For example, while couples that make financial decisions jointly report greater satisfaction than couples that make decisions individually, popular financial websites (e.g., Mint, Personal Capital) allow only one login. This design choice nudges families to make one person primarily responsible for reviewing data. For a variety of behaviors and goals, personal informatics data offer families better opportunities to understand each other’s behavior and experiences, but only if tools support effective — and appropriately privacy- and impression-preserving — sharing of data.

People also need better tools for sharing their data with peers and experts. Patients may bring weeks or months of self-tracked data to a clinical visit and ask their doctor to help them make sense of them, reviewing the data only on their mobile phones. Other times, doctors and their patients agree to use paper journals, which are more readily customized to their goals but result in data that is harder to aggregate and understand than digital data.

Where should personal informatics researchers and designers go?

As more personal informatics tools find their way into the wild, we see a rich set of goals, uses, and non-uses. To understand and inspire for this breadth of goals and uses, we need more flexible and more inclusive models for personal informatics and self-tracking, including models that account for changing goals and lapses in use. We should also continue to explore designs that reduce the burden of tracking and analyzing data alongside approaches that help people get more value from the data they do track. Finally, we need models and designs that account for the often-collaborative processes of tracking, understanding, and acting on data between families, caregivers, peers, and experts.

This post covers research with many great colleagues, including doctoral students Christina Chung, Daniel Epstein, Elena Agapie, Arpita Bhattacharya, Jessica Schroeder, and Ravi Karkar, post-doctoral scholar Laura Pina, and faculty Julie Kientz, James Fogarty, Gary Hsieh, Jasmine Zia, Allison Cole, and Roger Vilardaga. This research was funded in part by the National Science Foundation, the Agency for Healthcare Research & Quality, Microsoft, Intel, and the University of Washington, though the views expressed are my own.

Associate Professor of Human Centered Design & Engineering, University of Washington. https://smunson.com