I want to share seven instances where I have experimented with blending Design Research and Data Science, alongside fellow design researchers, data scientist extraordinaire Simon B Johnson, open-minded organisations and wonderful research participants. These experiments helped us layer people's lived experiences, our own observations, and automated information from trackers, routinely collected structured data, open data sources and quant surveys. We think this kind of conscious comparison between measurable information and lived experiences can help designers identify the most compelling design possibilities and explain them to decision makers (which, let's be honest, is the quickest path to making change). It also helps us be more critical of new interventions.
The examples below helped us observe detailed textures without becoming overly computational (in time-constrained situations). Some use pure data science; others just use quant data. We want to get better at this, so critical comments are welcome!
1. Pairing Narratives and Trackers
Discover: Fitbit Records + Energy Diaries
While at the Helen Hamlyn Centre for Design, we were working on a project to define a new opportunity area for Stannah Stairlifts around activity & wellbeing for older adults. To start to understand what activity means for our participants, Ross Atkin and I decided to pair an activity tracker with an energy diary. Participants were asked to fill in the diary every time they felt unusually tired or unusually energetic.
At the end of 5 days, I visited participants with a printout from their trackers. We discussed and compared their routines and activities on weekdays and weekends, how they felt, and how they reflected on their activity now versus previous phases in their lives. In the following one-hour interview, I gained a mental model of each participant’s relationship with everyday activity, weighing scales and mirrors, how flexible their routines were, and their aspirations for their future selves, moving beyond the traditional categories of “sporty”, “yoga-lover” and “sedentary”. This activity also revealed a big insight: participants had specific vocabularies for describing their daily activity and weight (“lose 5 kgs”, “walk 3 km a day”) but were vague when it came to body balance (“wobbly”, “feeling unsteady”). This insight, along with others, helped us start to design the body balance application.
I now know that gathering quant data and sense-making with the participant can reveal insights for both the designer and the participant. While planning research, I now always ask myself: “Is there an opportunity to track behaviour unobtrusively?” See more here:
2. Developing Meaningful Usage Propositions
Define: 30-day app test + story cards + Visualisation dashboards
While working with Stannah Stairlifts, we wanted to bring to life the value propositions for the new-to-the-world body balance application/wearable. We were confident about the technical capabilities of the device, but still unsure about real-life use scenarios, how they varied for people with vastly different capabilities, and the associated design requirements.
So we asked 10 diverse participants to use the vanilla version of the application for 30 days without priming them about our intentions for the device. At the end of the 30 days, we analysed all of the data they generated in-app along with their usage data. We printed this out and went to participants’ homes to discuss their experiences.
When asked to draw story panels of use, our participants described scenarios where they found the app useful: a faller said they used it to understand whether this was a stable day or a wobbly day and to decide their routine; a younger user said the device helped them feel more confident going out on their bike in icy weather; and our oldest participant told us he did not like using the application since it reminded him of how unlikely he was to improve his balance. They also told us about their frustrations with the design.
The quantitative results helped us understand and explain the trend of use, and the varied scenarios associated with use, for the three segments. We used these to mock up a random walk algorithm which would deliver appropriate advice and training exercises to each of the three segments. See below:
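To make the random-walk idea concrete, here is a minimal, hypothetical sketch of how such a recommender could step a user up or down through exercise difficulty. The level names, the `next_exercise` helper and the step-up probability are all invented for illustration; they are not the algorithm we actually mocked up.

```python
import random

# Hypothetical difficulty levels for balance exercises, easiest to hardest.
LEVELS = ["seated stretches", "supported stands", "heel-to-toe walk",
          "single-leg stand", "dynamic balance drills"]

def next_exercise(current_level, good_day, step_up_prob=0.7):
    """One step of a bounded random walk over exercise difficulty.

    On a good (stable) day the walk is more likely to step up a level;
    on a wobbly day it is more likely to step down. The walk is clamped
    to the valid range of levels, so advice never overshoots.
    """
    p = step_up_prob if good_day else 1 - step_up_prob
    step = 1 if random.random() < p else -1
    new_level = min(max(current_level + step, 0), len(LEVELS) - 1)
    return new_level, LEVELS[new_level]

# Example: simulate a week of advice for a user starting at level 2,
# given a (made-up) sequence of stable and wobbly days.
level = 2
plan = []
for stable in [True, True, False, True, False, False, True]:
    level, exercise = next_exercise(level, stable)
    plan.append(exercise)
```

Each segment could get its own `step_up_prob`, so that, say, fallers progress more cautiously than confident younger users.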
I highly recommend this approach. If there is time, an extended sensitisation period before co-design, allowing both users and designers to discover and understand real-life usage from the outset, can be powerful!
3. Triangulating for Clarity, Leaving Lots of Room for Inspiration
Define: Medication Ingestion History + Customer Service History + Metaphor Cards
While working with Proteus Digital Health, we (Celine Pering and I) were conducting research with recently-cured Hepatitis C patients to understand how the Proteus offering could be customised to better meet their needs. To start to understand their lived experience, Simon B Johnson helped us develop a custom dashboard which layered anonymised customer support tickets and adherence data. Simon helped analyse this data to reveal three types of experience journeys (from a customer support contact perspective).
We sent out diaries and cameras with questions and activities to participants and then met in their homes to get a better understanding of their real-life usage. We started by asking participants to recall specific incidents when they had contacted customer support (we had printed out a timeline) and walk us through what happened on the day, how they felt, and whom else they called.
This was powerful since we discovered pill rituals, frustrations and delighters. To broaden the discussion and get design inspiration, we had developed a set of metaphor cards. We requested each participant to talk us through their journey using the metaphor cards.
These cards were used in unexpected ways by participants and helped us understand the motivations and stressors for somebody who has opted for Hepatitis C treatment in an entirely new way. Through the cards, we learned that different generations had very different experiences of being treated for HepC. The Baby Boomers had a much longer and more difficult journey, with multiple stages (up to 7 cards). Millennials often had only 1 or 2 cards to describe their treatment experience, given that it was quicker to begin (fewer hurdles in being approved due to more liberal conditions) and they were less sick from the virus (since they were treated early). The metaphor cards allowed triangulation that was very useful in challenging deep-set assumptions that would not have surfaced otherwise.
4. Gathering Beautiful Evidence*
Discover: Photographs + Custom App + R
I have so many examples from the Balance with Stannah Stairlifts project, but here is one last method. Early on in the project, we wanted to better understand how, and to what extent, body balance differs between people, and how discernible that difference was to middle-aged and younger people. To accelerate our learning, we decided to simultaneously photograph our participants (using a 10-second long exposure) to create a self-mapping image while collecting accelerometer data from them for 30 seconds via Ross’s custom app.
Since the participants were photographed standing on each leg, we could compare the right- and left-leg results visually on camera, via the app’s sparkline visualisation, and through participants’ self-assessments. This helped us understand and explain differences even before we conducted detailed data analysis and roped in a data scientist.
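As a rough sketch of the kind of left-versus-right comparison the sparklines supported: a crude “sway” metric computed from raw accelerometer samples, compared across legs. The `sway_score` function and the synthetic readings below are illustrative assumptions, not the actual logic of Ross’s app.

```python
import math
import statistics

def sway_score(samples):
    """Crude steadiness metric: the standard deviation of acceleration
    magnitude over a single-leg stand. Higher means more sway.
    `samples` is a list of (x, y, z) accelerometer readings."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return statistics.stdev(magnitudes)

def compare_legs(left_samples, right_samples):
    """Score each leg and report which one was steadier."""
    left, right = sway_score(left_samples), sway_score(right_samples)
    return {"left": left, "right": right,
            "steadier": "left" if left < right else "right"}

# Synthetic 30 s of readings at 3 Hz: the left leg jitters less.
left = [(0.0, 0.0, 9.8 + 0.01 * (i % 3)) for i in range(90)]
right = [(0.0, 0.0, 9.8 + 0.2 * (i % 3)) for i in range(90)]
result = compare_legs(left, right)
```

Even a metric this crude gives designer and participant a shared number to react to, alongside the photograph and the participant’s own assessment.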
The term beautiful evidence comes from Tufte’s book Beautiful Evidence, where he explains how to produce and consume credible and engaging evidence. This is important since we and our stakeholders need a simple ‘in’ into our complex insights. This is why gathering self-mapping data in the form of images/drawings is something I am keen to explore further. For example, here are some participants’ drawings of their body balance. Combined with quant data, these paint a powerful picture of each participant’s relationship with body balance.
If you decide to keep going, the next three methods are about using data to understand organisational & socio-technical complexity.
5. Unpackaging & Highlighting Perspectives
Develop: Collages + Survey
During a recent project with a Biomedical Charity, the informant group was composed of scientific experts in different subject matter fields, including global experts in machine learning, genomics, cell biology, applied clinical research, hospital management, global population health and health informatics. While all of the experts shared issues, they approached them from different perspectives.
We were going to co-design a road map with these experts. For this activity to be meaningful, we had to:
a) Be aware of differences in the experts' perspectives (so we could account for biases)
b) Raise awareness among the experts of the varied needs of the group to inform their choices, and we had to do all of this remotely.
To increase empathy between experts, we decided to develop collages which visualised the current context of the three possible intervention spaces, accompanied by concepts.
We developed an accompanying survey which contained open-ended questions, and a ranking activity. Each intervention area was introduced with an accompanying question encouraging the respondent to identify points of view which resonated most with their own experiences. The emergent road map was critically reviewed with the broad stakeholder team paying attention to the perspective of each respondent. By socialising the varied motivations of the informants, we were able to synthesize a new intervention space with broader appeal.
6. Quick Organisational Immersion
Discover: Python + Pre-existing MSF Data
While working with Médecins Sans Frontières (MSF) Innovation Advisors (based in Brussels, London, Paris and Geneva) and Innovation Departments (based in Sweden, Japan and Nairobi), we wanted to quickly immerse ourselves in the context of the departments. To build this understanding quickly, we decided to tag their projects, review them as a network diagram and learn about thematic links between their portfolios. Simon also developed a spider diagram, running off a pre-existing survey, visualising each unit’s project focus and project stage. The resulting visualisations helped us ask questions about collaboration and communication practices between units, and discuss similarities and differences in their approaches to incubating, testing and scaling new ideas and their methods of spreading design thinking. We were also able to confirm our hypothesis that there are overlaps in project areas.
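The tagging step itself is simple. Here is a toy version with invented project names and tags (the real portfolio data is internal): link any two projects that share a tag, producing an edge list that can then be handed to a graph tool such as networkx or Gephi for the actual network diagram.

```python
from itertools import combinations

# Invented projects and tags, for illustration only.
projects = {
    "Project A": {"telemedicine", "training"},
    "Project B": {"training", "logistics"},
    "Project C": {"telemedicine", "data"},
    "Project D": {"logistics", "data"},
}

# Connect any two projects that share at least one tag;
# the shared tags become the edge label.
edges = []
for (p1, tags1), (p2, tags2) in combinations(projects.items(), 2):
    shared = tags1 & tags2
    if shared:
        edges.append((p1, p2, sorted(shared)))
```

In this toy example, Project A links to B (training) and C (telemedicine) but not to D, which is exactly the kind of thematic cluster the diagram made visible across units.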
Later on in the project, we wanted to better understand the shape and scale of the 468 MSF projects which were operational in 2018. We treated each project like a ‘user’ and analysed staff numbers, national-to-international staff ratios, project locations, tactical families, log typologies and staff nationalities. This helped us learn what a mainstream MSF project looks like (100 or more people, 94% national staff, between 0–2 years old) and contrast it with what an ‘extreme’ MSF project looks like (fewer than 20 people, 100% national staff..).
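In spirit, this was straightforward descriptive profiling. A toy sketch with invented records (the real cut-offs and attributes came from the MSF data, not the made-up `small_cutoff` below):

```python
import statistics

# Invented project records for illustration; real MSF data is internal.
projects = [
    {"name": "P1", "staff": 120, "national_pct": 95},
    {"name": "P2", "staff": 15,  "national_pct": 100},
    {"name": "P3", "staff": 200, "national_pct": 93},
    {"name": "P4", "staff": 90,  "national_pct": 94},
]

# A central tendency for the portfolio, to anchor "mainstream".
median_staff = statistics.median(p["staff"] for p in projects)

def profile(project, small_cutoff=20):
    """Label very small projects 'extreme', the rest 'mainstream'."""
    return "extreme" if project["staff"] < small_cutoff else "mainstream"

labels = {p["name"]: profile(p) for p in projects}
```

Treating each project as a row like this is what let us contrast the mainstream profile with the extremes, rather than designing only for an imagined average project.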
This understanding helped inform the subsequent operationalisation strategy which accompanied our proposed solution: “MSF Makes”. Now, I always try to ask myself the question: “What are the existing internal datasets which we can analyse to ensure our solution fits better with the organisation?”
7. Design Impact
Develop: Data + Maps
Recently, I have spent some time working on a freelance project (not a Hetco project) in Mozambique. The project is about maternal mental health and is headed by Tatiana Taylor and Jak Spencer from the Helen Hamlyn Centre for Design. In this project, we want to co-design interventions to improve wellbeing during and after pregnancy for young mothers. To do this, we intend to follow typical Human Centered Design practices and co-design with users.
Currently, we are trying to get permission to use the data collected by CISM as part of a twenty-year longitudinal study, to better hypothesise about the reach, usage and appropriateness of the service based on attributes such as education level, distance to the health centre and distance from educational services. Having this understanding can help us gauge the impact of different types of interventions. We are just at the start, so watch this space!
Geolocation and mapping are something I am excited to explore further. Several open datasets exist which can help inform design research. We used the indices of multiple deprivation maps (2015) to inform a quick review of Library of Things customers and borrowing patterns, while in San Francisco I mapped homeless users’ living locations around the city as a mini exercise.
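For the deprivation-map review, the mechanics boil down to joining customer records to the open IMD lookup by small-area code. A minimal sketch, assuming hypothetical CSV extracts (the column names and area codes below are invented for illustration):

```python
import csv
from io import StringIO

# Hypothetical extract of the open IMD 2015 data: decile per LSOA code.
imd_csv = """lsoa,imd_decile
E01000001,3
E01000002,8
"""

# Hypothetical customer records, already geocoded to an LSOA.
customers_csv = """customer_id,lsoa
C1,E01000001
C2,E01000002
C3,E01000001
"""

imd = {row["lsoa"]: int(row["imd_decile"])
       for row in csv.DictReader(StringIO(imd_csv))}

# Attach a deprivation decile to each customer, then count borrowers
# per decile to see which neighbourhoods the service actually reaches.
counts = {}
for row in csv.DictReader(StringIO(customers_csv)):
    decile = imd.get(row["lsoa"])
    counts[decile] = counts.get(decile, 0) + 1
```

The same join, pushed through a mapping library, is what turns borrowing patterns into a picture stakeholders can argue with.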
We (Simon B Johnson and I) have a lot of other ideas for integrating data and design research, like using an Internet of Things button to track repeated behaviours or using temperature sensors to learn about thermal comfort.
I would love to hear thoughts on the above ‘experiments’ from fellow design researchers, strategists and people who are generally interested in uncovering real issues and needs and finding new solutions! Next time I will write about the more traditional data science methods (like PCA) and design research methods (like random generators) that we use!
Many thanks to Ross Atkin & Celine Pering for being excellent collaborators, and to Simon B Johnson for your patience and for helping shape half-thought-out ideas!