How do you ask a “good” research question?

While there are no bad questions, some are better than others.

Joyce S. Lee
Designing Atlassian


Asking questions requires humility: an embracing of conscious ignorance. But it’s an essential practice for learning about our customers, and even ourselves. Key steps toward asking better research questions include checking baseline knowledge, challenging hypotheses and hunches, and leaving room for the unknown.

Think of a question.

Perhaps it’s something immediate like, ‘What snacks do I have in the kitchen?’ Or maybe something broader, like ‘What will my life look like five years from now?’

We ask questions all the time, to ourselves and to others. At Atlassian, the nature of inquiry varies in scope too, ranging from the specific — like ‘Which design should we use?’ and ‘What feature is most important?’ — to the broad, e.g. ‘Who is the primary user of this product?’ and ‘What are their goals and common tasks?’

The nature of our questions reveals what we prioritize, what has our attention. Re-examining questions can lead to re-considering possibilities, for instance, at a shorter or longer timescale. Alternatively, it can highlight a new perspective: this is particularly true when questions are explored with others, whether you’re having an informal chat with a colleague or a heart-to-heart with your therapist.

Researchers have an affinity for questions, particularly questioning questions. It’s an essential part of scoping projects, alongside factors like available time and resources and the stage of the product-development lifecycle. While there are no bad questions, some are better than others. Key steps toward asking better research questions include (1) checking baseline knowledge, (2) challenging hypotheses and hunches, and (3) leaving room for the unknown.

“Being willing to question is one thing; questioning well and effectively is another.”

– Warren Berger, A More Beautiful Question

1. Checking baseline knowledge

When you first detect a need for research, you likely have some existing knowledge of the area. In fact, your team’s collective understanding of the topic may add up to a lot. Reviewing relevant insights from prior work and learning from internal subject matter experts are general best practices that can save time and avoid redundant research.

But it’s still important to actively question what you know. Why?

  • We often experience source amnesia — we remember the what of information, without remembering where we learned it. Knowing the where around a truth matters because it provides context on the information’s reliability. Did that theory come from a well-researched podcast episode or a speculative clickbait headline that you saw on Twitter? Human memory is fallible, and we might incorrectly remember random ideas as facts.
  • The only constant in life is change. For instance, COVID-19 has not only normalized remote work policies among organizations but also prompted workers to re-evaluate what they expect from their jobs. The pandemic is just one example of how human behaviors and social norms can fundamentally change: we need to be open to the fact that what we may have accepted as “truths” may shift over time.

As our knowledge increases, we often realize that something seemingly straightforward is actually quite complex and nuanced. As the saying often attributed to Aristotle goes, “The more you know, the more you realize you don’t know.”

Be like Buzz here, not Woody

Questioning can seem uninformed or insubordinate, depending on the tone. But it’s a critical step in assessing assumptions and understanding what merits additional investigation. Identifying your unknowns also reveals the size and risk of your knowledge gaps. This in turn helps you prioritize your time and effort, both of which are generally in limited supply.

Where understanding of the problem is high, the best course of action may be through iteration, described in the prioritization framework below as ‘Ship it and Measure’ or ‘Design Heavy.’ Research should mostly be dedicated to areas where understanding of the problem is low — particularly where the risk is high (‘Research Heavy’).

Evaluating baseline knowledge enables easier research prioritization (source)

How to do this in practice

Assumption mapping is a useful exercise for teams to create a shared understanding of the known and unknown. Less formally, you can also try thought exercises like these:

  • What is our oldest data point or insight? What has or hasn’t changed since then?
  • Where did we learn what we know? How valid are our sources?
  • Is any evidence contradictory or contrary to expectation?

2. Challenging hypotheses and hunches

Once you’ve questioned what you know, you should have a better sense of what’s uncertain and requires further investigation. While these may be general knowledge gaps, they’re often framed into hypotheses or educated guesses.

Within the sciences, a hypothesis tends to come from a mindset that the world can be described objectively. In this belief system, known as positivism, the default approach to observing the world is a skeptical one centered on falsifiability: we can prove assertions wrong, but we can’t necessarily prove them true.

This form of empirical investigation is what we associate with the scientific method. As an example, let’s consider a childhood science experiment involving gummy bears. In this scenario, we might hypothesize, “If we soak gummy bears in liquid solutions for 24 hours, then they get bigger in size.” Looking at the sample results below, we have some supporting evidence that this is true.

But we can’t assert that this is universally true: the last ‘salt water’ gummy bear on the right actually appears smaller than the ‘control’ gummy bear on the left. Nor can we conclude that water or vinegar makes gummy bears grow by the greatest amount. They may be local maxima rather than the global maximum, because another, not-yet-tested liquid might make gummy bears grow even bigger.

Edge cases like the last ‘salt water’ bear make it easy to disprove — rather than prove — claims (source)

What does this mean in the context of Atlassian?

Teams running experiments do apply a rigorous approach to hypothesis testing. For instance, our Buyer Experience team runs experiments to ensure new web pages “do no harm” on metrics like evaluation and purchase rate — compared to the status quo. We hypothesize that new pages won’t do worse, and as long as the data doesn’t show otherwise, the hypothesis stands.
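As a rough, illustrative sketch of this kind of check (the numbers, function, and thresholds here are hypothetical, not Atlassian data or tooling), a simplified one-sided comparison of purchase rates between a control page and a new page can be framed as a two-proportion z-test. Note that a strict “do no harm” check would be a non-inferiority test with an explicit margin; this sketch only asks whether the new page converts significantly worse:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """One-sided two-proportion z-test.

    Asks: does variant B (new page) convert worse than control A?
    Returns the z statistic and the one-sided p-value for
    H1: B's conversion rate is lower than A's.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis (no difference)
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # P(Z <= z) via the standard normal CDF: small p-value means
    # strong evidence the new page does harm
    p_value = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical counts: purchases out of visitors, control vs. new page
z, p = two_proportion_z(success_a=480, n_a=10_000, success_b=465, n_b=10_000)
# A large p-value here means we lack evidence that the new page "does harm"
```

In practice, teams would pick the significance level and minimum detectable effect before launching the experiment, rather than eyeballing the p-value afterward.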

More often, though, we refer to hypotheses more casually, without formal hypothesis testing in mind. Colloquially, we might say ‘hypothesis’ when we mean ‘hunch.’ Whichever term is used, it’s important to remain open to evidence that disconfirms or challenges what you think.

We only grow our confidence as we find more supporting evidence over time — particularly across different disciplines and channels like customer support tickets, product analytics, qualitative research, CSAT surveys, and so on. But even as you become more confident, you shouldn’t stop asking questions. In fact, you should expect your questions to evolve as you continue to learn over time.

What’s the problem with ‘validation’?

Well, it’s not always a problem; like most things, it depends on the context. But validation carries an implicit bias toward already knowing what the answer or solution is. This makes sense where certainty is relatively high.

However, teams often think about ‘validating’ when a ‘testing’ mindset would be more appropriate. In these scenarios, research is called in to generate evidence, biasing it toward confirming hunches. Alternatively, one might ‘cherry-pick’ evidence from existing data to ‘validate’ claims. These actions might help you feel good about your work, but they ultimately do not serve your customers.

We should welcome both positive and negative feedback (source)

How to do this in practice

  • When thinking about testing hypotheses or hunches, check your bias by using balanced language — for instance: “validate or invalidate,” “confirm or disconfirm,” etc.
  • Likewise, avoid revealing your hypotheses or hunches to customers, as social politeness may lead them to agree with you — even if it’s not what they truly believe.
  • In lieu of qualitative ‘validation research’, consider exploring how you might validate via analytics or other quantitative methods.

3. Leaving room for the unknown

Even if you have some knowledge about a topic, you don’t need to have a hypothesis to do research. Another approach is to center on ‘why’ types of questions — leaving room for unexpected insights.

While there is strength in mixing research methods, qualitative methods tend to be better for open-ended inquiries. These rely on interpretivism as the primary approach to explaining the world. Within this belief system, we acknowledge that the underlying thoughts and emotions that drive human behavior are not objective in the way that phenomena like gravity or thermodynamics are. But through repeated experiences, we come to believe in the stability of social norms just as we do in scientific theories.

Qualitative research is purposefully deep rather than wide, not a “consolation prize” when you don’t have enough users to do a survey. It can more readily explain beliefs and behaviors that may be surprising, revealing your blind spots. As an example, consider how cultures in the U.S. and Taiwan associate opposite meanings with the colors red and green: in U.S. markets, red signals falling prices, while in Taiwanese markets it signals rising ones. This type of insight might not be articulated in a preconceived hypothesis, but it would nevertheless be useful to know for teams working on a stock-performance display.

Research allows you to learn what you might not realize you needed to know (source)

What does this mean in the context of Atlassian?

We learn a lot from doing open-ended discovery, expanding focus beyond specific features within our products. One approach is to investigate how customers use competitor products, which pushes us beyond the confines of Atlassian-specific thinking. But simply watching people use our products can be incredibly enlightening as well: we see how products look when populated with “real” data, and how they’re used within the broader ecosystem of tools used for work.

Watching people use our products or competitors’ also lets us see when and how they use a product “wrongly”, or not as intended. These are often creative workarounds to problems they have or deficiencies they experience in our product. Meta Product Manager Simon Cross refers to this as “paving the cow path”: users signaling what they desire.

While it’s a nice gate, people defy designs in unpredictable ways (source)

How to do this in practice

  • When conducting research, leave some room for open-ended discovery (e.g. text field questions on a survey, broader discussion before walking through a prototype, etc.)
  • Feel free to go “off script” if a research participant reveals something unexpected.
  • If a participant turns out not to be a perfect match for your target profile, think about what else you can learn from the research session.
  • If you don’t conduct customer research of your own, try watching others’ sessions. We suggest building a habit of observing at least 2 hours every 6 weeks, as recommended by Jared Spool, UX expert and educator.

Stay curious

Good questions often lead to more questions, as a drop of water creates cascading ripples. But not knowing should be a cause for excitement, not frustration or despair. As neuroscientist Stuart Firestein writes in Ignorance: How It Drives Science, one of the keys to discovery is the willingness of scientists to embrace conscious ignorance — and to use questions as a means of navigating to new discoveries.

“In an honest search for knowledge, you quite often have to abide by ignorance for an indefinite period.”

– Erwin Schrödinger, Nobel Prize-winning physicist

Even among non-scientists, we should welcome not knowing. New experiences enable us to grow. We are in a perpetual state of self-doubt when we challenge ourselves to do things we’ve never done before, whether that’s moving to a new city, running a marathon, or becoming a parent. But being open to uncertainty is essential for learning and generating new knowledge, about both our customers and ourselves.

Many thanks to Tim Dixon and Ann Ou for their thoughtful feedback on earlier drafts of this post.



Joyce S. Lee
Designing Atlassian

UX researcher at Atlassian and occasional writer; previously published in Logic, Quartz & Designboom. Amateur zinester, mushroom forager & scuba diver.