How can we make quantitative research more trauma-informed?

How Chayn designed a survey about abuse that over 50k people responded to

Jenny H Winfield
12 min read · Jul 31, 2024
An illustration of a woman smiling, sitting at her laptop

Last year, as part of the ongoing work Chayn are doing to deepen our knowledge about technology-facilitated abuse, we ran a major survey to hear from survivors around the world. We wanted to gather insights about the degree to which people using online dating apps had experienced tech abuse, and we worked with our partner, Bumble Inc., to reach out to them.

What is tech abuse? It could look like old-fashioned harassment and stalking but delivered via digital means, such as on WhatsApp, email or social media. It could be abuse that was inherently digital, such as doxxing (personal info being shared online without the person’s consent), sharing of intimate images online without consent, and deepfakes (creating and sharing images and videos that manipulate a person’s likeness using AI to make them appear to be saying/doing things that they never did). Ultimately, being an organisation that offers support at the intersection of technology and abuse, Chayn wanted to identify how we can do more for survivors in this space.

The specific insights from that survey will hopefully come out soon, but for now I’d like to share some process-related tips about running a large quantitative (quant) study in a trauma-informed way. Researchers don’t always take the same care when asking questions in a survey as we would when talking with someone directly. I have been thinking a lot about why, and I think we can do better. I hope this piece will help to change that.

If you’d like to learn more about our approach to trauma-informed design, check out Chayn’s user research blogs, our free public workshops and our white paper.

1. Consider informed consent as carefully as you would with qualitative research

Often when it comes to quant research, researchers are under pressure to get a large number of responses. There’s an assumption that the more data you have, the more reliable the results (and the more people will take your insights seriously). One way that researchers try to achieve that high completion rate is to reduce the amount of time that the survey takes to complete. The hope is that people get all the way through questions before they get distracted, bored or annoyed.

So to bring this overall time down, we review how crucial each question is, and we might make the case for removing some. I’ve also seen researchers limit the amount of up-front information given, so that respondents can quickly get to the main survey questions. But I would caution against doing this, if the focus of your survey means that you’ll touch on trauma.

That’s because in quant research, the introductory information is where you’ll be setting respondents up for informed consent.

In our survey, we shared a content warning about what the survey questions entailed, including the fact that we would be asking respondents about abuse. The wording we used was:

ALT: Within this survey, there are questions focused on the topics of sexual abuse, assault or harassment. We ask about the experiences that people using the Bumble app may have had of these things. Answering is entirely voluntary, anonymous and we encourage you to only share what you are comfortable to share with us.

When the results started to pour in, Typeform warned me that we had a 10% drop-off when people read that text. I really welcomed it.

Drop-off is considered the enemy in surveys. It’s dreaded. But in this case, I was very happy to see it. And we still had 56,649 people respond (helped along by Bumble’s massive reach).

ALT: A slide from our results deck, which highlights the fact that we had a 10% drop off when we gave a content warning — which is a good thing.
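For a sense of scale, the two reported figures (56,649 completions and a 10% drop-off at the warning screen) let us estimate how many people actually read the warning. A back-of-the-envelope sketch; it treats completions as everyone who continued past the warning, which is a simplification (some people will have dropped off later too), so the implied figures are estimates, not reported numbers:

```python
# Back-of-the-envelope estimate from the two reported figures:
# 56,649 completions, and a 10% drop-off at the content warning.
completions = 56_649
drop_off_rate = 0.10  # share who left after reading the warning

# If 90% of people who saw the warning continued, the implied
# number of viewers is completions / 0.9. This assumes nobody
# dropped off after the warning, so it is a lower-bound estimate.
implied_viewers = completions / (1 - drop_off_rate)
left_after_warning = implied_viewers - completions

print(round(implied_viewers))      # people who reached the warning
print(round(left_after_warning))   # people who chose not to continue
```

Seen this way, roughly six thousand people exercised their choice not to take part — which, as above, is the informed consent working as intended.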

I’d recommend that you do everything you can to ensure that people know what they’re getting into when completing a quant survey about any topic, but especially if it’s going to ask highly sensitive questions or touch on trauma. The goal is not to have respondents race to the end, it’s to have them take part in an informed and empowered way.

If you think that sharing information may cause people to drop off, then don’t conceal this information — share it upfront. Yes, you may receive fewer responses, have to leave your survey open for longer, or think about incentivising it differently. But remember, if someone starts your survey and then begins to feel unsafe and quits, you will have an incomplete response anyway. Sharing upfront and inviting informed consent means you can go some way towards avoiding causing harm with your survey. You can also build trust in this moment, by sharing your approach to how data will be stored and anonymised, and who will have access to it.

In trauma-informed quant research, we should always empower respondents to decline answering questions, and invite them to stop the survey completely at any time if they want to. Having a consent check-in midway through is nice, as well as having section headers which explain which questions are coming next.

We talk more about how we approach informed consent for qualitative (qual) research in our trauma-informed user research blog series — particularly in the pieces on Safety, Agency and Power Sharing.

2. Tell respondents why you’re asking those sensitive questions

So often in quant surveys I see people launching into highly sensitive questions without enough care and consideration for how these will impact the way the respondent feels. I even see surveys about trauma being labelled as ‘exciting’, which just feels so off.

Considering that so many of us work in UX, we could be much more thoughtful about the user experience of completing our surveys.

Designing your questions and your copy with care is really central to a trauma-informed approach. Are we asking questions with the context that reveals why we want to know those answers? Are we transparently sharing where this data will go, how it will be used, what all of this is for?

Sharing the reason for your questions can help respondents to see a bit about who you are, and it’s another opportunity to build trust. Since we are not there in person to answer questions that a respondent might have as they go, we can try to anticipate questions and provide good answers throughout.

In the example below from our survey, you can see that we explained why we were asking about gender identity.

A screenshot from a survey question that asks ‘Please select one or more options that reflect your gender identity’, with the description underneath: ‘We’re asking this to better understand how our users identify, with a view to designing even more inclusive and gender affirming resources. You can select “prefer not to answer”.’

We gave the description of why we are asking about gender because it needs saying. Often people who have been marginalised based on their gender have experienced disproportionate violence, and have also been excluded from support services based on their identity. What we are doing at Chayn is the opposite — we’re asking in an effort to include them and centre their needs.

We should be able to clearly justify each question in a survey — if we can’t do this in a way that would make total sense for a respondent, then we should not be including the question. This goes for all quant research but especially trauma-informed quant.

If you can’t think of a reason for a question that you’d be happy to share with a respondent, then don’t ask it.

3. For multi-language surveys, work with a localisation team who understand trauma

The survey work I’m talking about in this piece was a collaborative effort between a team of many people within Chayn and Bumble. We launched our tech abuse survey in English, French, German, Spanish and Portuguese, and this meant that we received responses from people in 87 countries.

Our approach to the project was to build each of the surveys as a distinct piece of work, led by either a staff member or volunteer from within Chayn. This meant that we worked with 5 individuals (plus additional proofreaders) who speak each of the respective languages natively and who, crucially, understand gender-based violence.

In short, we did not design the survey in English and outsource it to be translated by another organisation or by individual freelancers disconnected from our mission. We knew that to create a trauma-informed survey experience in multiple languages, this would not work.

Straight translations are often inaccurate, miss context and sound clunky. The last thing we wanted survivors to think when answering our survey was that Chayn as an organisation could not be bothered to get the content accurate for their language — it would create a feeling of being unseen and unimportant. This is where our trauma-informed principle of accountability helped to guide us.

There were many words and phrases that we discussed as a team in the design phase of the survey. There were descriptions of abuse that just didn’t make sense in all languages. Some words exist in one language but have no equivalent, or no literal translation, in another — for example, ‘gaslighting’. With this one, we discussed that gaslighting as an experience does exist across multiple cultures, even if there isn’t a specific name for it, so we used the English word and gave the term some extra description in the non-English surveys.

We discussed terms such as “abuser” that are gendered in languages like Spanish and French (“agresor”) but not in English. These required a conversation between translators, to explore how we could keep terms gender neutral. We decided, as a first option, to re-phrase the description in a way that implied a neutral form (so, ‘the person who…’).

One of the big challenges was balancing the fact that we were aiming for a single cohesive data set at the end of the project to make sense of, with allowing for enough nuance in the language for the survey to feel right locally. We wanted to honour the needs of survivors in different countries and languages while creating results that could be compared to each other fairly easily.
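One practical way to reconcile those two goals at the analysis stage is to keep each language’s survey as its own export, then map the localised answer labels back to shared codes before combining everything into one dataset. A minimal sketch in Python with pandas — the column names, answer labels and translations here are hypothetical illustrations, not Chayn’s actual schema:

```python
# Sketch: combining per-language survey responses into one comparable
# dataset. Column names and answer labels are hypothetical; in practice
# each frame would be loaded from a per-language survey export.
import pandas as pd

# Map each language's localised answer labels back to shared codes,
# so results can be compared fairly across languages in one analysis.
answer_codes = {
    "en": {"Yes": "yes", "No": "no", "Prefer not to answer": "no_answer"},
    "es": {"Sí": "yes", "No": "no", "Prefiero no responder": "no_answer"},
}

# Stand-ins for the per-language exports (normally read from files).
exports = {
    "en": pd.DataFrame({"experienced_abuse": ["Yes", "Prefer not to answer"]}),
    "es": pd.DataFrame({"experienced_abuse": ["Sí", "No"]}),
}

frames = []
for lang, df in exports.items():
    df = df.copy()
    df["language"] = lang  # keep the language for local segmentation later
    df["experienced_abuse"] = df["experienced_abuse"].map(answer_codes[lang])
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
print(combined["experienced_abuse"].value_counts().to_dict())
```

Keeping a `language` column means the combined results can still be cut by locale, so the cohesive dataset doesn’t erase the regional nuance the translators worked to preserve.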

Working with a team of people who share Chayn’s mission and really understood the purpose of the work, as well as the regional and cultural nuances of abuse, was vital. It helped us to succeed in hearing from so many survivors in a sensitive but structured way.

I was so lucky to work with and learn from Paz Romero on this translation and localisation piece.

4. Don’t over-probe just because you can

Quant research has this strange aura of being more detached and mechanical than qual, which can persuade us, on our less caring days, to ask for more than is reasonable. When we’ve finished asking the core questions in our survey, we might be tempted to start asking some broader questions, just out of interest. Or we might ask multiple questions that needlessly delve into painful experiences. This is extractive, and harmful.

Of course, in qual we can also be guilty of over-probing, but there are usually signs of participant discomfort which can act as a deterrent. With quant, we don’t see or hear these signals.

I’ve noticed this over-probing a lot with surveys about women’s health. Do you need to know all of my specific conditions, or are you mainly interested in whether I am managing multiple medications? If it’s the latter, just ask about that, and share why it’s relevant. Do you need to know how happy/unhappy I am generally with my doctor’s level of care, or are you interested in whether I’d accept some level of care from an automated service? If it’s the latter, just ask about that and share why it’s relevant.

Every time we probe more, we raise the potential to cause a harmful memory to resurface. We should ask ourselves, do we really need to know this?

In our survey, we could easily have asked how many times in total a respondent had experienced sexual abuse or assault in their life, or how much their experiences had affected their overall confidence, just because it’s interesting. I found myself drafting a question like this:

Thinking about the instances of abuse that you’ve experienced, how have they affected your overall feelings of confidence?

But we knew that answering these kinds of questions was likely to be very painful, and there was no real need for us to ask them. Instead, we thought about what we were really trying to learn about confidence. We decided that it was A) whether survivors felt confident to challenge perpetrators online who were making them feel uncomfortable, and B) whether they would appreciate more support in building this confidence. This information is much more specific, relevant to our mission and actionable by our team, so we asked those questions instead.

We should limit the number of questions we ask about sensitive topics, because each one costs something for the participant to answer. We should be careful not to add in broad questions about the impact on someone’s life, or very detailed questions about trauma, if we don’t really need that data.

5. End on a helpful, hopeful note

Think about how your last few questions will feel for the participant to answer. If you’ve covered some difficult areas, could the questions now ease into a lighter topic or tone? At Chayn we often ask questions such as ‘What makes you feel hopeful?’ and ‘What kind of support would you like to explore in the future?’ at the end of our survivor surveys. Think about the emotional arc of the survey and the experience you’re hoping to create.

On your end screen, always thank the participant warmly for their contribution, acknowledging (as you hopefully did at the start) that sharing is not always easy, but that you’ll do your best to make it worth it — with whatever you’re designing or improving.

If you can, offer good resources for them to turn to at the end. Maybe it’s those that your organisation provides, or maybe you take the time to research, vet and signpost to some relevant ones. Don’t leave someone without a caring ending, when they may be feeling quite vulnerable and exposed.

A screenshot from the end of our survey where we directed people to further resources and thanked them for taking part. We said ‘Your responses will help us provide relevant content and safety support for people who have traumatic online dating experiences’.

6. In your analysis phase, be mindful of secondary trauma

Often when we talk about vicarious trauma as user researchers, we home in on the risks posed by qualitative work: the sessions where we sit down with someone to hear their story. We mentally prepare for hours of chatting and rich conversations veering in directions we need to stay steady for. We think about our resilience (I personally try to sleep a lot during qualitative research weeks, and remove potential stressors such as working away from home). We schedule breaks. We buddy up for emotional support after a particularly tough session.

But I don’t think we talk about vicarious trauma often enough with quant research, and our experience of analysing data about trauma. To me, data analysis on the topic of abuse feels like it poses a very high risk for secondary trauma in researchers.

Sitting alone and silently scrolling through tens of thousands of data points, all with a person and a story behind them of abuse, can be horrifying. The potential for this to activate our own traumas, if we are someone who has experienced anything akin to what we’re reading, is very strong.

The sheer scale of the abuse, and reading open ended responses with people asking for help, can feel overwhelming. When I first started working at Chayn, I really appreciated how they named this.

Chayn said: there’s potential for you to feel secondary trauma just from doing desk research or reviewing data, or in fact from doing any task related to your job.

It helped me to bring this up when I was briefing our freelance data analyst for this work. I asked her to specifically keep an eye out for the signs of vicarious trauma and to take her time with the analysis. This is an excerpt from the brief I wrote for her:

A screenshot of the briefing that I wrote for our freelance data analyst. It outlines that this work may be upsetting, as well as some options for support, including group and individual therapy sessions

We must take care of ourselves and each other while navigating this work. Embedding a more relational approach (rather than an extractive mindset) in our processes should also normalise thoughtfully considering our respondents’ feelings. We can almost always do more to design a positive experience of taking part in a research project.

Overall, it’s clear that the trauma-informed design space is gathering pace, and that the role of UX researchers is critical within it. So many of us are becoming more aware of the steps we can take to practice safe-enough qualitative work around traumatic topics. I’m excited to see more critical conversation around how quantitative research fits into this overall picture. We will be doing more writing soon about trauma-informed quant — given that survey tools are so heavily relied on in our sector, and data carries so much power, it’s a topic that deserves a lot more thinking and care.

Our 6 tips for doing trauma-informed quant research are:

  • Set up informed consent, just as you would in qual
  • For sensitive questions, offer the context for why you’re asking
  • Work with a localisation partner who understands the topic
  • Don’t over-probe, just because you can
  • End surveys on a helpful, hopeful note
  • Be mindful of vicarious trauma in your data analysis phase

What did we miss?

Get in touch with jenny@chayn.co if you’d like to talk more about trauma-informed quantitative research.


Written by Jenny H Winfield

Trauma informed Research and Strategy for creative endeavours. Specialist researcher in taboo topics and with underserved audiences. Chayn HQ. Ex-IDEO.
