UX Research for Comments Sections

How we combined qualitative and quantitative methods to understand Financial Times readers’ relationship with news comments sections.

Amina Amid
FT Product & Technology

--

Comments sections are a contentious topic in the world of digital news media. On one hand, they can provide a forum for public debate and user interaction, creating connections with and among users. Financial Times readers often leave invaluable comments on the wide range of topics we cover — from technical analysis to film reviews — and for many, reading others’ contributions has become a complementary experience to reading our articles. On the other hand, online comments sections in general are often described as hostile places, with media research showing that argumentative remarks deter users from taking part in online conversations. To investigate the role that comments sections play in our user experience, the FT’s Storytelling team set out to conduct discovery research on the topic.

Here’s how we approached our study:

1. Gather secondary sources

We didn’t have to look far to get started in the world of comments. Existing research into reader communities, behavioural data and, of course, user comments all helped us build a picture of commenting at the FT. With this, we mapped the team’s assumptions and articulated opportunity areas, such as comments section quality, women’s experiences, the comment reading experience, and the comment writing experience.

That said, comments sections exist in a wider ecosystem of online discussions that make up users’ overall experience. For readers, the FT plays just one small part in interacting with others online. To get a clearer picture of the online commenting sphere, we drew on research from other fields, like communication studies, psychology and social sciences. Academic researchers, for example, have already studied how social characteristics like gender can shape online interactions. Women are less likely to comment on the news online, particularly when it comes to topics like national and international affairs, despite likely equal levels of interest. Prior experiences can in turn shape how users approach our comments sections.

A drawing of a pair of hands sorting through a pile of papers.

Understanding where the FT sits was important if we were to attempt making the world (of comments) a better place.

2. Elicit group discussions

We conducted two group interviews: one with participants who wrote comments (“commenters”) and one with participants who read them but didn’t write any themselves (“non-commenters”).

Speaking to non-commenters meant conducting research in a forum with people who may be reluctant to share their opinions in a public setting; indeed, commenters themselves noted the similarity between their discussion group and the FT’s informative comments sections. To mitigate any apprehension towards group discussion, we kept the groups at 5–6 people each, which in turn kept conversations intimate (and even ‘exclusive’). We also decided to interview commenters first, so we could use one group’s insights as prompts for the other. This helped to drive the discussion and to identify gaps in the user experience among less engaged readers.

Reader communities can have a distinct language the researcher doesn’t speak, so group settings are helpful for surfacing in-depth, unanticipated insights. Examples participants bring up can also prompt others to recall their experiences with specific comments; this may be necessary if commenting is second nature, rather than something someone actively thinks about. For the discussion to work, however, it was important not to recreate the imbalances of online comments sections we had seen in our literature review, but instead to amplify user voices. This meant not only having a diverse group of participants and perspectives, but also taking care to construct the groups in a way that made participants feel comfortable sharing those perspectives in the first place.

Using framework analysis (a method that uses a matrix output to map data to user typologies like “commenters” and “non-commenters”), we then identified themes to answer our research questions and presented them in a summary grid that contrasted the groups on key points, like the value they saw in comments or the pain points they faced. This meant we could surface patterns and identify the factors that the key differences between groups boiled down to. One of these factors, for example, was user perceptions of online comments sections’ quality.

A drawing of a table with “value” and “pain points” as two column headers, and “commenters” and “non-commenters” as two rows.

Summarising our findings in this way also meant we could identify blind spots relating to our opportunity areas and formulate hypotheses for statistical testing.
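
As an illustration, here’s a minimal sketch of what such a framework matrix can look like in code, using pandas. The typologies match our study, but the themes and entries below are invented for illustration, not actual findings:

    import pandas as pd

    # Rows are the user typologies; columns are the themes we charted
    # our data against. All entries are hypothetical examples.
    summary_grid = pd.DataFrame(
        {
            "value": [
                "expert insight; debate among peers",
                "a quick way to gauge reactions without joining in",
            ],
            "pain_points": [
                "off-topic arguments derailing good threads",
                "uncertainty about whether a contribution is welcome",
            ],
        },
        index=["commenters", "non-commenters"],
    )

    # Reading across a row shows one group's experience; reading down a
    # column contrasts the groups on a single theme.
    print(summary_grid)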

3. Quantify findings

Groupthink (wanting to conform to group consensus instead of voicing true opinions) can happen in any study involving group activity. We needed to account for its effects, which meant supplementing our data with quantitative insights. Using a mix of behavioural and survey data, we tested the hypotheses we had identified in the previous phase.

Having done the qualitative work made the process of operationalisation (turning abstract concepts into measurable variables) easier, since we could quote our users directly to find out how many others shared the same opinion. We turned phrases we heard in research sessions, like “feeling a sense of togetherness with other readers,” into scale items that made up broader concepts, like community. Using regression analysis, we compared and visualised these factors’ effects on quality ratings:

A sketch visualising regression coefficients of the following predictor variables: findability, community and moderation.
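
To make the operationalisation step concrete, here is a small sketch in Python using pandas and statsmodels. The item names, the randomly generated responses and the model itself are all illustrative assumptions, not our actual survey or analysis:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200  # illustrative number of survey respondents

    # Hypothetical 1-5 Likert responses to individual scale items.
    survey = pd.DataFrame({
        "togetherness": rng.integers(1, 6, n),  # "a sense of togetherness..."
        "belonging": rng.integers(1, 6, n),
        "findability": rng.integers(1, 6, n),
        "moderation": rng.integers(1, 6, n),
        "quality": rng.integers(1, 6, n),  # outcome: perceived quality
    })

    # Operationalisation: average related items into a broader concept.
    survey["community"] = survey[["togetherness", "belonging"]].mean(axis=1)

    # Regress quality ratings on the three predictors from the sketch above.
    X = sm.add_constant(survey[["findability", "community", "moderation"]])
    model = sm.OLS(survey["quality"].astype(float), X).fit()

    # Each coefficient estimates a factor's effect on perceived quality.
    print(model.params)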

Of course, we had now moved from in-depth qualitative data to numerical data tied to abstract concepts. To maintain the richness of our findings, we had to synthesise the information collected at all stages.

4. Bring it all together

As we progressed through these stages, we were gathering A LOT of rich data. With open-text answers added to our data pool, we now had about 100,000 words of feedback to sift through to supplement our statistics. To decide where to look deeper, we combined computational methods, like word frequencies and concordance (looking at the context in which keywords appear), with our earlier analysis. For example, we noticed that survey respondents with certain scores referred to comments as “informative”. This was good to know, but it didn’t tell us how exactly we could locate an informative comment. Looking at the context of this word, however, showed that “informative” appears in concordance with “debate”, “links”, “discussion”, “industry” and “experts”, which brought an abstract concept closer to concrete terms.

A drawing visualising concordance analysis, which shows that the word “informative” is used in concordance with the following words: debate, links, discussion, industry and experts.
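
For readers curious about the mechanics, below is an illustrative sketch of word frequencies plus a simple keyword-in-context (concordance) pass in plain Python. The sample responses are invented, and real analyses often use a dedicated library (NLTK, for example, ships a concordance tool):

    import re
    from collections import Counter

    # Invented open-text survey responses.
    responses = [
        "Really informative debate, with links to industry experts.",
        "The discussion under that article was informative and civil.",
    ]

    # Word frequencies tell us where to look deeper.
    tokens = [t for r in responses for t in re.findall(r"[a-z']+", r.lower())]
    print(Counter(tokens).most_common(5))

    # Concordance: print a window of words around each hit of a keyword,
    # keeping each response separate so windows don't cross answers.
    def concordance(keyword, window=4):
        for r in responses:
            toks = re.findall(r"[a-z']+", r.lower())
            for i, tok in enumerate(toks):
                if tok == keyword:
                    print(" ".join(toks[max(0, i - window): i + window + 1]))

    concordance("informative")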

These concrete terms could then be defined further with the insights we collected in interviews, and synthesised to tell the full story of our users.

Conclusion

To research a topic like commenting, it was important to acknowledge the ecosystem in which our product lives. Blending insights from existing literature, group discussions and quantitative analysis gave us a clear picture of the ‘what, why and how’ when it comes to commenting, and bringing in users’ related experiences helped us understand where the FT is situated in the world of online discussions. By exploring the perspectives of both commenters and non-commenters, with a focus on specific opportunity areas, we were then able to pinpoint the factors impacting the user experience. This shed light on how we might help more people see comments as many of our readers already do: an enriching part of their interaction with the FT.
