Designing Canopy’s Tonic Beta App

How we went from Zero to Beta using a research-led product design process

Christin Roman
Published in Type/Code
17 min read · Sep 20, 2019

When Canopy approached us to design their first product — launched publicly this week as Tonic — the app was only loosely conceived. At that point, they had a running start on the technology, but they needed to get a Beta program off the ground to begin using and testing that technology, and there were few existing precedents for what the user experience should be.

Canopy was founded by Brian Whitman and a cohort of ex-Spotify employees dedicated to the goal of delivering personalized content recommendations without actually storing any of their users’ personal data. Using a technique called differential privacy (think of those bug reports that get sent to Apple when your computer crashes — it doesn’t matter that it was your computer that crashed, just that somebody’s computer crashed while doing that thing you did), they can interpret users’ actions without the need to learn their identity. The machine learning that deciphers each individual’s interests lives on the user’s iPhone, so any raw data the app collects about the user’s activity is stored locally, not sent to a server. The only thing that is sent to the server is a little bit of information called a taste vector — a set of coordinates that the recommendation engine then uses to determine other things the user might find interesting.
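
To make that architecture concrete, here is a minimal sketch, in Swift since Tonic is an iPhone app, of how a locally computed taste vector might be noised before upload and then matched against items on the server. Everything here is illustrative: the names (TasteVector, privatize, cosineSimilarity), the Laplace-noise mechanism, and the parameter values are my assumptions, not Canopy's actual implementation.

    import Foundation

    // Hypothetical on-device profile. Raw interaction data never leaves
    // the phone; only this small vector of coordinates does.
    struct TasteVector {
        var coordinates: [Double]  // weights over interest dimensions
    }

    // Laplace noise via inverse-CDF sampling, the classic differential-
    // privacy mechanism. (Sketch only: ignores the u == -0.5 edge case.)
    func laplaceNoise(scale b: Double) -> Double {
        let u = Double.random(in: -0.5..<0.5)
        let sign: Double = u < 0 ? -1 : 1
        return -b * sign * log(1 - 2 * abs(u))
    }

    // Add calibrated noise to each coordinate before upload, so the
    // server learns a taste, never an identity.
    func privatize(_ v: TasteVector, epsilon: Double, sensitivity: Double) -> TasteVector {
        let scale = sensitivity / epsilon
        return TasteVector(coordinates: v.coordinates.map { $0 + laplaceNoise(scale: scale) })
    }

    // Server-side sketch: score items by cosine similarity between the
    // noised user vector and each item's vector. No identity required.
    func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
        let dot = zip(a, b).map(*).reduce(0, +)
        let normA = sqrt(a.map { $0 * $0 }.reduce(0, +))
        let normB = sqrt(b.map { $0 * $0 }.reduce(0, +))
        return dot / (normA * normB)
    }

    // The raw vector would be computed locally from the user's activity
    // (not shown); only the privatized copy is ever sent to the server.
    let local = TasteVector(coordinates: [0.82, 0.10, 0.45])
    let upload = privatize(local, epsilon: 1.0, sensitivity: 1.0)
    print(cosineSimilarity(upload.coordinates, [0.75, 0.20, 0.40]))

The design choice worth noticing is the direction of the data flow: the machine learning happens on the device, and the server only ever receives a few noised coordinates, which is exactly the trade-off described above.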

They knew how the technology would work on the back-end, but weren't sure yet how it should express itself as an end product, and were mostly operating on their own assumptions and preferences. They needed to validate their ideas and test those assumptions by learning more about people's behaviors around algorithmically recommended content, and their attitudes about how their personal data is used to inform those recommendations. Operating as an extension of their product team, Type/Code led both the research that would result in the first concept of the Tonic app and the product design process that would see Canopy through several iterations and a successful Beta program before going on to a public launch. To shed some light on how this process worked for us, I kept a diary of what we did and what we learned along the way.

Week 1: Discovery

What We Did

The first week was all about learning what we could from the Canopy team. As with all of our projects, we started with a discovery kick-off meeting where we could ask lots of questions, get the Type/Code team up to speed about what had already been learned or accomplished, discuss ideas, and come to a consensus on what we still needed to know. We used a few tools to aid this discussion:

We looked at how people get their content recommendations now as a way to imagine how Canopy would compare against the competitive landscape.

Competitive Matrix: Together, we examined the competitive landscape by plotting existing content recommendation products and services onto a matrix of key attributes. This helped us talk about Canopy in relative terms, to evaluate the user experience of existing products, and identify how Canopy would be similar or different. (It would also prove to be helpful later as we interviewed people about their experiences using particular apps and services.)

An empathy map is a way to discuss and air assumptions about the target audience before research begins.

Empathy Mapping: Together, we filled out an empathy map, positing assumptions about what our target user might think, feel, or do. This was just the “skeleton” of the kind of person who would use Canopy. We would still need to talk to real people and gather real examples of thoughts, feelings, and behaviors, but this provided a chance to get all of our assumptions out in the open about who we thought our target user would be.

My own personal problem statement around content recommendations.

Problem Statements: Finally, we constructed a few possible problem statements to define what the app would help our users achieve. We knew that Canopy as a company was trying to solve the capital “P” problem of data privacy, and that people would be interested in this, but that wasn’t enough. The app itself needed its own value proposition. We brainstormed ways that the app could provide something useful or different within the current content recommendation landscape. These would be our “hypotheses” to prove or disprove as we began our research.

What We Learned

The big takeaway was that Canopy knew its technology far better than its product. To design the app, we would need to learn more about how people currently get and use content recommendations, what they like or dislike about the recommendations they receive, and how they feel about the personal data used to power those algorithmically generated recommendations.

Week 2: Research Plan

What We Did

Before we got started, we articulated the purpose and methodology of our research:

  • What were we trying to do/learn?
  • Who did we need to talk to?
  • What were we going to ask them?
  • What were our hypotheses that we needed to prove or disprove?

I’ve found that even the smallest research projects benefit from having a few clearly articulated research goals. Ours were to:

  1. Validate the current product concept;
  2. Learn more about our target user;
  3. Articulate the problem statement that our product would solve.

What We Learned

Although we had a rough demographic as a starting place, we figured out some ways that we could screen our interviewees to get the most useful information possible. We needed to talk to people who were considered good sources of recommendations by their peers, and we needed to quickly suss out what each person’s primary content type was so we could focus on that for the interview. (Think of that friend of yours who always gives you great book recommendations, or that other friend who is always talking about some interesting thing they learned in a podcast — that’s who we needed to talk to.)

Week 3: Interviews

What We Did

Now we were ready to conduct the interviews. We had a rough script to keep the interviews on track and consistent, even though our interviewees would differ by the types of content they consumed. I interviewed 8 people from 4 different cities, spanning a 20-year age range, and asked them all about the content they consumed, how and where they got their recommendations, how they felt about those recommendations, and how they felt about their personal data being used to power those recommendations.

What We Learned

Going into the interviews, I was concerned that the script felt clunky, and that I would need to find some crafty way to segue from the topic of content recommendations to the topic of data privacy, but I was surprised at how easy it was. I don't recall a single interview where the conversation about content recommendations didn't naturally veer onto the topic of data privacy. Even when talking to people who didn't work in tech, I was never the first person to use the word "algorithm," and I never once had to explain how one works. Everybody already understood that an algorithm is the thing that tells you what to read/watch/listen to, and that it operates by making assumptions about you based on the data that you feed it. Some of that data is innocuous and honestly come by, but some of it is too scary to think about. The trade-off is established and understood, even if people are starting to question whether the sacrifice is worth it.

Week 4: Synthesis

What We Did

I'll be honest: synthesizing research is a messy and imperfect process. Suffice it to say, this was the week that I sorted through everything I'd heard and tried to make sense of it all, using a combination of sticky notes and my go-to Trello board.

Because it wouldn’t be a post about UX without a picture of some sticky notes.

What We Learned

What we learned by talking to people is that those who were "concerned" about data privacy (I called these our "hand-wringers") and those who were "apathetic" (the ones who would say deeply cynical things or use phrases like "that horse has already left the barn") were actually doing the exact same things when it came to protecting their data privacy.

Yes, you read that right. Two people with very different feelings about data privacy were exhibiting exactly the same behaviors. This was important because it challenged our original notion that this would (or should) be an app that only "concerned" people would use, and it forced us to broaden our ideas about who our target audience could be.

Week 5: Insights

What We Did

We shared this finding, along with the other insights we'd uncovered, which centered mostly on:

  • Habits: How and when people consume content
  • Recommendations: How they get them, how they feel about them
  • Algorithms: How aware/knowledgeable they are about them, how they feel about them
  • Data privacy: How they feel about it, actions they take
  • Social media: Usage, feelings

What We Learned

We gained some valuable insights into the feelings and behaviors of our target audience, and it was clear that data privacy was an area of concern for a lot of people. But we realized we needed a better definition of data privacy. What if two people who were talking about data privacy were actually talking about different things? When someone said they wanted to protect their personal data, what exactly did that mean? We also felt that a larger sample couldn't hurt, so a survey seemed like the right tool at this point to gather more quantitative information from a larger group of people.

Week 6: Surveys

What We Did

We put out a survey on data privacy and blasted our networks with it. (We didn't get too nit-picky about who filled out the survey, as we knew our social networks would likely skew in the direction of our target user.)

We used a survey to gather more quantitative data from a larger group of people about their feelings and behaviors around data privacy.

What We Learned

From a tactical perspective, the important thing we gained was a better understanding of which data users consider to be personal or private, so that we could start with a benchmark of what people are comfortable with. But, besides a few things at the top of the “that’s not cool” list (for example, selling or sharing users’ data with third parties), most people don’t really seem to have a concrete idea of what data privacy looks like. And while they’d like to take more action to protect themselves, they admit that they don’t know what those actions are or whether they would even have any impact. There is just no precedent for what good data hygiene looks like, and no measurable results.

So while this survey gave us some useful data to draw upon, it also reinforced what we already knew — that even people who want to protect their data don’t know exactly what they are looking for, and that we had an opportunity to create a new model for what data privacy could look like.

Week 7: Ideation

What We Did

At this point, we were chomping at the bit to start designing. There might not have been a great precedent to point to that made people feel like they understood and were in control of their personal data, but we had some ideas about how to create one. Plus, we were armed with information about not just who our target user was, but what their habits and preferences were in regard to content recommendations, and what they were thinking and feeling about recommendation algorithms and data privacy.

So I summarized our most actionable takeaways from the research, presented them to a few colleagues, and invited everyone to join an ideation exercise. I posted salient quotes and insights from our research for inspiration as we took turns focusing on each of the four main pieces of functionality in the app (signup, onboarding, recommendations, and feedback/control). Everyone got a Sharpie, a stack of paper, and 10 minutes to sketch out 10 ideas. Then we presented them to each other, discussed, and iterated.

We generated a TON of ideas. Some of them were great. Some of them were silly. Some of them were probably not technically feasible. But they were all useful for helping us begin to envision a whole world of possibilities and to avoid just jumping onto the first idea that popped up. (This is one of several reasons why gathering a diverse crowd of people — including non-designers and people who may not be intimately involved in the project — is beneficial.)

We used a dot voting method for sharing and narrowing down on our ideas.

Now how to decide which of these ideas were worth pursuing? After consolidating similar themes and re-sketching them, we did a voting exercise with the whole studio. The ideas were grouped by functionality, each with its own clearly articulated priorities. Based on this, we voted on the ideas that we thought best achieved these goals. The voting was done silently, but the results were discussed vigorously, and we zeroed in on 3–4 ideas for each part of the app that we felt were worth presenting to the Canopy team as a starting place.

What We Learned

This was our first realization that one of the core tenets of the app was already starting to prove problematic. We knew we wanted to create something that was transparent, maybe even educational, about the data it collected, and that gave users as much control over that data as possible. But it seemed that even when transparency and control are meant to serve the user's best interests, people tend to balk when they actually see their data. This had been foreshadowed in my interviews, where people seemed to get more uncomfortable the more we talked about the types of data that the apps they use collect about them. Now we were starting to see how that discomfort would play out in our design.

So even though we had a lot of cool ideas that took the notion of data collection and turned it on its head, the overwhelming consensus was that leading too hard with the data privacy angle was off-putting rather than reassuring. It was fun, maybe even a bit cathartic, to explore these options, but we were finding that transparency could come at a price, even when our goal was to put the user in control of their data. We decided to shelve many of the ideas that were too "data-forward" and let transparency take a backseat to the other aspects of the app.

Week 8: Synthesis

What We Did

Even after sorting and consolidating and voting and narrowing down our ideas we were still left with…just a lot of ideas. We needed a way to help us zero in further on what we should actually implement. Or, more immediately, what we should test on users.

Using post-its to map out a quick journey of the typical user experience, identifying the pain points where our app could improve or differentiate.

Throwing together a quick user journey helped us to re-articulate the app’s value proposition and to zero in on the ideas that best exemplified how it would be different from the existing user experience. (This wasn’t anything fancy — just a handful of post-it notes that I threw up on the wall one day and then talked through with Canopy’s product manager to get her input.)

What We Learned

Even though the user experience of the app hadn’t been designed yet, there was still a lot we knew about how the app would work based on the underlying technology, and a few differentiators between Canopy and the typical content recommendation service. These were the distinguishing characteristics of the app that we felt we should focus on and demonstrate through the design. We had a handful of ideas we’d sketched that accomplished these things, and we narrowed it down to the three that we liked best. Now these ideas were no longer just sketches on a piece of paper, but our hypotheses to test.

Week 9: Concept Design & User Testing

What We Did

So that’s what we did! Taking the three ideas that we felt best represented our value proposition, I sketched out a stack of screens for (almost) the entire app, and created a test script that would take the user through these screens in a methodical, task-oriented way, so that we could see and hear people’s reactions.

This was our first "concept" — a clear and cohesive direction that communicated what the app does and demonstrated its value proposition, without going into too much gritty detail. Then we tested the concept on five people, a couple of whom I had interviewed previously, and the rest of whom we felt were a good fit for our target audience.

What We Learned

Again, we found ourselves battling the preconceived notions of how content recommendations work, and butting up against the realization that transparency alone is not a reassuring proposition. At this point we found that, even after making the decision not to lead too much with transparency, the thing that we were still being decidedly transparent about — the thing that we thought would most prove our dedication to privacy and be a key differentiator from the status quo — still wasn’t resonating with people in the way that we’d hoped. In fact, the arguments that we were making to the user were actually just inviting more skepticism, and making people distrustful that the app worked in the way that we said it did.

Weeks 10–11: Insights and Final Concept

What We Did

We shared the findings from our user testing with Canopy.

What We Learned

At this point, we felt we were at a crossroads. One of the defining characteristics of the app, which we had thought would be the killer feature, was turning out to be not so killer after all. We could either double down on it and hope that, after the initial shock of unfamiliarity, people would come around; or we could focus our attention on content and brand (the two things, as we'd learned in our interviews, that will always trump data privacy concerns, even among the "hand-wringers"), allowing those aspects of the app to do more of the work of communicating Canopy's dedication to privacy and proving how our app was different. We would still go on to tackle the issues of transparency and control, but it was clear we needed an angle that resonated more with our users. So we went the latter route and focused, for now, on the features we felt were essential for a private Beta launch.

Weeks 12–20: Wireframing and UI Design

What We Did

Now we were really well positioned to start wireframing, with little to none of the pesky uncertainty we'd grappled with at the outset of the project. We used our sketches as a framework, but now, working in a digital medium and at medium fidelity, we could start digging into the interaction design, thinking more deeply about things like navigation, functionality, and content.

The more tactical work of wireframing begins — defining and documenting every screen and interaction.

From here the path forward was much clearer, as conversations became less about WHAT we should design or WHY we should design it, and more about HOW we should design it. We leaned on our developers more for ideas on implementation and advice on technical feasibility. We started to adapt the Canopy branding to give the app a unique look and feel. We tried out some minor variations (Should we use icons or text? Should we display the source name or the artist's name?) and explored illustrations and animations that would help support the brand and the functionality. Our decisions became much less strategic and much more tactical. We further defined and refined our concept as we designed in higher fidelity and prepared to launch the app to a group of private Beta users.

Designing the interactions of the app in higher fidelity

But we also continued to refer back to the things we'd learned from our user testing, and even from the interviews we'd done all the way back in week three and the survey that followed. A lot, actually. As the Canopy team grew larger and new voices were added to the discussion, we were able to articulate why we'd made the decisions we'd made, and our concept remained the framework upon which more detailed decisions could either be made now as part of an MVP or slated for later, when we had more information. And at this point, while the engineering team was still building out the mechanics of the recommendation engine, they were able to adapt the technology to work differently than originally imagined, allowing us to sidestep some of the technical aspects of our original concept that user testing had proven to be problematic.

What We Learned

You can’t skip the part where you are learning about the problem and go straight to the part where you are designing the solution. Or rather, you can, but you do so at the risk of simply operating on your own assumptions. (This is something that we already knew, of course, but every once in a while it’s nice to work on a project where everyone is willing to go on that journey together.)

Beyond Beta: Iteratively Designing, Prototyping, and Testing

What We Did

After the Beta app launched in late 2018, we stayed engaged with Canopy for the next four months as their product team grew and their product cycle became more rigorous. Our process was still the same — we created concepts, refined designs, prototyped, tested, and iterated over several more launches — but the cadence changed and the process became more metrics-driven as Canopy geared up for their public launch as Tonic.

As feedback from the Beta testers began to roll in, it informed some new features, but it also reinvigorated discussions about old ones that had been explored previously but had not made it past the concept stage. An onboarding feature that we had nixed for the first launch was easily picked up again, iterated on, tested, and launched within a couple of weeks.

Onboarding Prototype

And that pesky transparency issue we had grappled with so early on became much clearer once we had feedback from our private Beta group. We were able to play with concepts that offered users more feedback and control than they were used to having, while taking their concerns about privacy into consideration.

Feedback and Control Concept Explorations

What We Learned

This is where the time spent early on exploring the problem space and generating ideas really paid off. When we needed to add a new feature, we already had a shared understanding of what it could look like, because it had already been sketched out a dozen different ways weeks earlier. And now we were armed not only with the research that had led us to those solutions the first time around, but also with more pointed feedback from our private Beta group, which helped focus our attention on the right issues and avoid some of our previous pitfalls.

Canopy was proposing to do things in a way people weren't used to seeing them done. Understanding more about our potential users helped us avoid some fatal mistakes from the get-go, and continuing to gather feedback and explore concepts after launch helped Canopy find the right product-market fit and go on to a successful public launch. Go check out Tonic and see the final results!
