How a Meta Researcher can use Qual Spotter to help departments share research insights

We’ve just launched the Alpha of Qual Spotter — a qualitative user research data sharing platform — and this article describes how a Meta Researcher can use it to go beyond keyword fishing and connect departments with the right research insights.

BACKGROUND

As part of developing Qual Spotter, we’ve spent over a year thinking about how archived qualitative discovery data can be used beyond the project it was originally commissioned for.

During that time we’ve spoken with research experts from around the world and have developed a method to give archived data more life, and built an alpha version of the platform to deliver that method.

Communication/collaboration/openness is the point

The original problem that led us to create Qual Spotter was the lack of communication between units within a large hospital in Melbourne. Had those units been aware of each other’s projects and the research that had already been done, a lot of planning time could have been saved, and better final results achieved.

Project teams weren’t actively hiding information from each other, there simply wasn’t an infrastructure set up for them to share what they had learned.

As we continued to build Qual Spotter, our main goal changed from making a queryable data repository to building a better way for project teams to learn from each other — collaboration, not data, is the key.

The Meta Researcher becomes an Insight Connector

We believe that while it should be possible for a Meta Researcher to comb through archived transcripts and reports (and with Qual Spotter it is), there is a better way to breathe new life into past data — give the Meta Researcher the tools to find knowledge gaps and let communication flourish between project team members.

Don’t start with Keyword Fishing

During our own interviews, when we asked Researchers how they would find insights and patterns in research data that was not their own, or completely outside their project context, the response was consistent: they would fish for keywords and phrases and hope for good results. However, none of the people we spoke to thought fishing for keywords would yield useful information, and all thought the process would be tedious.

Some reasons why keywords and phrases are unreliable:

  • Individual Researchers have their own taxonomy
  • Researchers have their own annotation methods
  • Researchers sometimes have to work across a variety of software throughout the interview, combing, analysis and reporting stages, and this can vary depending on the client
  • There are many aliases for tasks, feelings, touchpoints and motivations
  • Interview participants have unique ways of speaking and phrasing

When confronted with uncategorised or inconsistently tagged transcripts, the task of combing for insights seems insurmountable.

Start with ‘what do project teams want to know?’

If we help project teams find valuable, relevant insights, the process of knowledge sharing becomes simpler and reproducible.

There are two components to knowledge sharing:

  • Broadcasting what you want to know for your project
  • Creating insights from your projects that you want to share (in a realistic amount of time)

Broadcasting what you want to know for your project

When a project is in discovery phase, it’s anticipated that there will be some preconceived ideas, hypotheses and questions from the project team. While it can be risky to frame discovery data from interviews with these ideas, there is value in recording these notions.

We expect that discovery interviews would be commissioned in the standard way, and the data combed without preconceptions, with patterns presenting themselves (the results being ‘pure’ discovery data, for want of a better phrase).

There is scope for those ideas, hypotheses and questions to be addressed with insights and experience from other project teams, which can:

  • Confirm patterns and concepts found in ‘pure’ discovery data analysis
  • Highlight other concepts that may have been missed
  • Alert project teams to potential gotchas and pitfalls
  • Advise project teams of successes that could be emulated

Using Qual Spotter, project team members would be able to declare what they want to learn from other project teams.

The structure of ‘what we want to know’

The structure is:

  • Context
  • Category
  • Title with keywords

Context radiates out from the specific touchpoint through to the universal:

  • CONTEXT — TOUCHPOINT (device, another person, signage)
  • CONTEXT — PLACE (building, town, step in process)
  • CONTEXT — SYSTEM (an organisation, company, system, whole process)
  • CONTEXT — UNIVERSAL (applies to any context)

Categories need to be few enough to be memorable, and discrete enough to be useful. A potential set of categories could be:

  • GUIDING PRINCIPLES
  • JOBS TO BE DONE/TASKS
  • INTERPERSONAL

The Title should be descriptive, and conform as much as is feasible with the organisation’s taxonomy. That said, the categories and context will provide backup when it comes to browsing open projects.
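As a sketch, the three-part structure above could be modelled like this. The enum values mirror the lists in this section, but the type and field names are illustrative assumptions, not Qual Spotter’s actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Context(Enum):
    TOUCHPOINT = "touchpoint"   # device, another person, signage
    PLACE = "place"             # building, town, step in process
    SYSTEM = "system"           # an organisation, company, whole process
    UNIVERSAL = "universal"     # applies to any context

class Category(Enum):
    GUIDING_PRINCIPLES = "guiding principles"
    JOBS_TO_BE_DONE = "jobs to be done/tasks"
    INTERPERSONAL = "interpersonal"

@dataclass
class WantToKnow:
    """One 'what we want to know' declaration on a project."""
    context: Context
    category: Category
    title: str  # descriptive, keeping to the organisation's taxonomy where feasible

# One of the sample declarations from the browsing example below
wish = WantToKnow(Context.PLACE, Category.JOBS_TO_BE_DONE,
                  "We want to know about creating a new account")
```

Because context and category are fixed enumerations while the title is free text, the structured parts do the heavy lifting when browsing, and the title only needs to carry keywords.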

Browsing open projects to find what people want to know

The Insight Connector starts by browsing open projects, and can refine the list by selecting different contexts and categories, then further refine by keyword.

Example: ‘Find me all of the open projects where the project team wants to know more about JOBS TO BE DONE in the PLACE context’ may return results like:

  • We want to know about entering large amounts of data in a system
  • We want to know about making decisions from data visualisations
  • We want to know about creating a new account
  • We want to know about making a new subscription
  • We want to know how a user might search for a job

From here, refining by keyword can be more useful. For example, refining the list above on ‘new’ would return similar results, but ‘account’ or ‘creating’ would not return the very similar ‘making a new subscription’.

There should be few enough results that the Insight Connector can be more judicious when refining with keywords — it’s still fishing, but in a much smaller pond with clearer water.
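A minimal sketch of this browse-then-refine flow, using declarations adapted from the example list above (the function and field names are hypothetical, not Qual Spotter’s API):

```python
from dataclasses import dataclass

@dataclass
class Declaration:
    project: str
    context: str    # "touchpoint" | "place" | "system" | "universal"
    category: str   # "guiding principles" | "jobs to be done" | "interpersonal"
    title: str

def browse(declarations, context=None, category=None, keyword=None):
    """Refine open-project declarations: structured filters first, keyword last."""
    results = declarations
    if context:
        results = [d for d in results if d.context == context]
    if category:
        results = [d for d in results if d.category == category]
    if keyword:
        results = [d for d in results if keyword.lower() in d.title.lower()]
    return results

open_projects = [
    Declaration("A", "place", "jobs to be done",
                "We want to know about creating a new account"),
    Declaration("B", "place", "jobs to be done",
                "We want to know about making a new subscription"),
    Declaration("C", "place", "jobs to be done",
                "We want to know about entering large amounts of data in a system"),
]

# 'new' catches both near-duplicate declarations;
# 'account' would miss the very similar subscription one
hits = browse(open_projects, context="place", category="jobs to be done",
              keyword="new")
```

The keyword match is deliberately the last, loosest filter: by the time it runs, context and category have already shrunk the pond.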

Categories and context address the following:

  • Project team members are unlikely to want to learn and conform to a new keyword taxonomy
  • A feasible number of categories is required — few enough to be easy to remember, but enough to allow useful separation and grouping
  • Keyword aliases that might otherwise be missed are accommodated

Empower enthusiastic, talented and experienced team members

When undertaking research interviews and doing the initial analysis, the standard process is to have an experienced Researcher do the work, and then present that to the other team members to use in workshops and the like to explore the problem and solution spaces.

Outside of that process (particularly once the workshops are completed, diagrams drawn and reports written), non-researchers can also contribute insights.

Non-researcher team members are often technically proficient in the project’s field, and can have many years’ first-hand experience with users. Giving them a voice beyond the workshop ensures that they have an opportunity to share and express.

Many team members have an entrepreneurial mindset, and a desire to communicate outside of their team context. This attitude should be taken advantage of, as an enthusiastic team member with a tool to share will be more likely to do so, to the benefit of everyone.

The user-centered mindset

As an added benefit, regularly exercising these techniques will keep all team members in a user-focused mindset, rather than being forced into the mindset and then taken out of that space for months until next time.

Creating insights from your projects that you want to share (in a realistic amount of time)

A Quick Insight is a statement made by the project team, with transcripts to back up that statement. They are designed to work as standalone pieces of information, which requires their scope to be relatively broad. They fall shy of maxims, but are clearly objective.

They are meant to be quick to create and fast to consume. They support ‘pure’ research data, trigger conversations and sometimes provide answers, but most importantly, their creators are easily identified and contactable.

Quick Insights can be created from scratch, but we aim to structure them in a way that allows as much pre-existing report content as possible to be repurposed as Quick Insights. We have looked at some standard research outputs and identified the following that could be repurposed with minimal effort:

  • The ‘Summary’ of a ‘Concept’ as described by Indi Young
  • Jobs to be done, as coined by Clayton Christensen

There are bound to be a few more, and as we collaborate with researchers and organisations more Quick Insight analogs will become apparent!

Creating a Quick Insight

A project team member can use Qual Spotter to create a Quick Insight from their project page (they can also add video and audio that Qual Spotter transcribes and transcodes, or upload their report documents and diagrams).

The structure of a Quick Insight is:

  • Context
  • Category
  • Title with keywords
  • Moments

About Moments

Moments are points in the video or audio hosted in Qual Spotter, tied to a point in the transcript. Most of the time this will be a quote from that point in the timeline.

A Moment can simply be the quote and media, but there is the capacity to attribute task types as tags, and the researcher can also add their own notes.
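Putting the two structures together, a Quick Insight and its Moments might be modelled as follows. The field names, sample title and quote are invented for illustration and are not Qual Spotter’s actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Moment:
    """A point in the hosted media, tied to a point in the transcript."""
    timestamp_s: float                               # offset into the video/audio
    quote: str                                       # transcript excerpt at that point
    tags: List[str] = field(default_factory=list)    # optional task-type tags
    notes: Optional[str] = None                      # optional researcher notes

@dataclass
class QuickInsight:
    """A standalone statement from the project team, backed by transcripts."""
    context: str
    category: str
    title: str
    moments: List[Moment] = field(default_factory=list)

# A hypothetical Quick Insight with one backing Moment
insight = QuickInsight(
    context="system",
    category="jobs to be done/tasks",
    title="Users batch their data entry at the end of the day",
    moments=[Moment(742.0, "I save it all up and do it after five, honestly.",
                    tags=["data entry"])],
)
```

Only the quote and media reference are required on a Moment; tags and notes are the optional higher-fidelity fields discussed in the next section.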

Rewarding high fidelity, coping with lowest fidelity

Qual Spotter is designed to be effective even when only the bare minimum of required data is entered, but if project team members complete more fields, the higher-fidelity data becomes more useful when querying and, further down the track, can reveal trends.

Allowing users to get the job done quickly and add more data later if they choose gives Qual Spotter the best chance of user take-up and retention. Forcing a large amount of data entry to make the tool worthwhile will doom it to failure.
