Bootcamp

From idea to product, one lesson at a time. To submit your story: https://tinyurl.com/bootspub1

So What Actually Is a Questionnaire?

The science of questionnaires — and what UX often gets wrong

10 min read · Sep 18, 2025


A researcher with a clipboard interviewing a participant, symbolising the rigour and ethics required in UX research and questionnaire design.
Behind every questionnaire lies a careful process of research, ethics and analysis — far more than just filling in a form.

Most UX surveys don’t measure what we think they do. This article looks at how psychologists design questionnaires, and why rigour and ethics matter more than ever in the age of AI.

I began my career as a psychologist in Human Factors at BT Laboratories in the 1990s. At that time, user experience design and customer research were carried out with scientific discipline and rigour. A questionnaire wasn’t something you dashed off in an afternoon or absorbed in a two-day workshop. It was the outcome of weeks of careful definition, testing and validation.

Today, the picture is very different. The rise of design thinking workshops and online survey tools has made it easy for anyone to create what appears to be a professional questionnaire. In workshops, participants are often shown a range of methods, including questionnaires, but in the space of two days it is simply impossible to teach people how to design, validate and analyse them in the way a psychologist is trained to do. And with a few clicks, an attractive online form can be produced that generates graphs and charts which look like evidence. Yet appearances can be deceptive. The danger is that many people creating these forms do not really know what a scientifically validated questionnaire is, or what separates reliable data from misleading noise.

In this article, I want to explain why this matters for anyone working in UX. Questionnaires are often treated as quick, disposable tools, yet in reality they can shape major business decisions. I will look at the common pitfalls that undermine them, explain how psychologists design and validate them, outline the different types of questionnaire, and highlight the ethical standards that govern psychological research.

For UX professionals, the value of understanding research methods isn’t in becoming statisticians — it’s in building credibility, avoiding misleading data, and showing why UX insight matters to the business.

Artificial intelligence now adds another layer of complexity. Tools that generate surveys in seconds are becoming more common in UX workflows, offering speed and efficiency but often without the rigour required for validity and reliability. For practitioners, the challenge is to combine the best of what AI can do with the deeper methodological understanding that comes from psychology’s research methods. By doing so, UX professionals can ensure that their insights are not only fast but trustworthy — and in turn, demonstrate to their organisations why UX research is essential to making sound business decisions.

What’s the Issue?

At the heart of the problem is validity. A questionnaire that looks professional is not necessarily measuring what it claims to measure.

If you are asking people about customer satisfaction, are your questions truly capturing that construct, or are you unintentionally measuring something else, such as short-term frustration with a call centre agent or the respondent’s general mood that day?

And questionnaires are just one method among many in the research toolkit. Depending on the question you are trying to answer, an interview, focus group, observation study, or ethnographic approach may be far more appropriate. It is crucial to understand the bigger picture of research design before reaching automatically for a questionnaire. For this reason, I would strongly recommend Business Research Methods by Alan Bryman and Emma Bell, which provides a clear introduction to the range of methods available and guidance on when a questionnaire is genuinely the right tool to use.

When organisations base decisions on data from poorly designed questionnaires, they risk drawing the wrong conclusions. Strategies are developed, products launched, and investments made on shaky foundations. The irony is that this rush to generate data can actually be worse than having no data at all, because leaders are given a false sense of certainty.

Why Might a Questionnaire Produce Invalid Results?

There are many reasons why questionnaires can lead to invalid results. Some of these are relatively straightforward, others more subtle.

Ambiguous or poorly worded questions
Questions that are vague or contain multiple ideas at once (so-called ‘double-barrelled’ questions) confuse respondents and muddy the data. For example: “How satisfied are you with your manager and your team?” conflates two separate relationships.

Leading or loaded language
Even a slight nudge in wording can bias results. Consider the difference between “Do you support the sensible new policy?” and “Do you support the controversial new policy?” Both questions purport to measure attitudes towards the same policy, yet they almost guarantee different results.

Researcher bias
The way in which a questionnaire is introduced, the order in which questions are asked, or the context in which respondents complete it can all introduce subtle biases. If participants sense what the researcher expects, their answers may unconsciously shift in that direction.

Interpretation bias
Even when the data is collected cleanly, the analysis can go astray. People without a statistical background may over-interpret patterns that are simply noise, or treat small differences as meaningful when they are not.

Sampling errors
Finally, there is the question of who you ask. If your sample is too small, too narrow, or unrepresentative of the wider population, your results will not generalise. For instance, surveying only your most loyal customers will give a very different picture from surveying the wider market.

Types of Questionnaire: Quantitative and Qualitative

Close-up of a person filling out a multiple-choice questionnaire form with a pencil, symbolising structured quantitative research methods.
Quantitative questionnaires provide structure and comparability — but without careful design, they risk producing misleading results.

Not all questionnaires are the same. Broadly speaking, they fall into two categories:

Quantitative questionnaires
These rely on structured, closed questions with fixed response options (for example, Likert scales such as “strongly agree” to “strongly disagree”). They are designed to generate numerical data that can be analysed statistically, allowing researchers to compare groups, track changes over time and identify correlations.

Qualitative questionnaires
These use open-ended questions that invite free responses in the respondent’s own words. The aim is to capture depth, nuance and meaning rather than numbers. While harder to analyse systematically, qualitative questionnaires can provide rich insights into attitudes, experiences and motivations that might be hidden in purely quantitative approaches.

In practice, many well-designed studies combine both approaches, using quantitative items to provide structure and comparability, and qualitative questions to give voice to the human context behind the numbers.

How Psychologists Build a Questionnaire

Designing a questionnaire is a far more deliberate process than simply writing down a list of questions.

A questionnaire isn’t just a set of questions — it’s the outcome of a disciplined process of defining constructs, piloting, and refining until the results are reliable.

So how do psychologists do it? The answer lies in a disciplined scientific process.

1. Define the construct
Every questionnaire begins with a clear psychological construct: an abstract quality we want to measure. It could be job satisfaction, trust, motivation, anxiety, or customer loyalty. Constructs cannot be observed directly, so we measure them indirectly through carefully chosen questions, or ‘items’.

2. Generate an item pool
Psychologists then produce a large set of candidate questions, phrased in different ways. The aim is to cover the construct as fully as possible, while avoiding repetition or triviality.

3. Pilot testing
The draft items are tested with a small group. Which items make sense? Which confuse? Which elicit consistent responses? At this stage, many questions are discarded or refined.

4. Control for artefacts
Psychologists are trained to look out for experimental artefacts: unintended influences that distort results. For example, the order of questions can prime certain answers, or respondents may try to answer in ways they believe are socially acceptable. Controlling for these artefacts is vital to obtaining genuine responses.
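One common safeguard against order effects is to randomise the presentation order of items for each respondent, so that no single sequence can systematically prime answers. A minimal sketch (the item texts are invented for illustration):

```python
import random

# Hypothetical item pool for a "customer satisfaction" construct
items = [
    "The service met my expectations.",
    "I would recommend this service to others.",
    "Resolving my issue took longer than it should have.",  # negatively worded
    "The staff communicated clearly.",
]

def presentation_order(items, seed=None):
    """Return a shuffled copy of the items for one respondent.

    Passing a per-respondent seed makes the ordering reproducible
    for auditing, while still varying across respondents.
    """
    rng = random.Random(seed)
    order = items[:]          # copy, so the master list is untouched
    rng.shuffle(order)
    return order

# Each respondent sees the same items, but in their own order
for respondent_id in (1, 2, 3):
    print(respondent_id, presentation_order(items, seed=respondent_id))
```

Randomisation does not remove order effects for any single respondent, but it spreads them evenly across the sample so they cancel out in aggregate.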

5. Iterative refinement
Questionnaire design is never a one-shot process. It requires multiple cycles of testing, refining, and re-testing until the items measure the construct reliably.

How Psychologists Analyse Results

The rigour doesn’t end once the questionnaire is written. Equally important is the analysis.

Statistical validation
Psychologists use statistical methods such as factor analysis to see whether the items group together in ways that reflect the underlying construct. Reliability measures, such as Cronbach’s alpha, are used to assess whether the items are internally consistent. These methods may sound technical, but they are vital if you are to move beyond superficial responses and establish whether a questionnaire genuinely measures what it claims to.
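To make the idea of internal consistency concrete, here is a small sketch of Cronbach's alpha computed from first principles: alpha compares the variance of each item with the variance of respondents' total scores. The data below is invented purely to illustrate the calculation:

```python
import statistics

def cronbach_alpha(responses):
    """Cronbach's alpha for internal consistency.

    responses: list of lists, one inner list of item scores per respondent.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    k = len(responses[0])                      # number of items
    per_item = list(zip(*responses))           # transpose: scores per item
    item_vars = [statistics.variance(col) for col in per_item]
    total_var = statistics.variance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy 5-point Likert data: 6 respondents x 4 items (invented numbers)
data = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
]
alpha = cronbach_alpha(data)
print(round(alpha, 2))  # values above ~0.7 are conventionally taken as acceptable
```

In practice you would reverse-score any negatively worded items before running this, and a real analysis would use a statistical package rather than hand-rolled code; the point here is only to show what the statistic is actually comparing.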

For a deeper exploration of these techniques, I recommend Research Methods and Statistics in Psychology by Hugh Coolican, which offers an accessible yet rigorous guide to the statistical foundations of questionnaire design.

Different forms of validity
We also distinguish between different types of validity. Construct validity asks whether the questionnaire really measures what it claims to. Predictive validity examines whether scores forecast future outcomes. Convergent and discriminant validity assess whether the questionnaire correlates with related constructs but not with unrelated ones.

Cross-checking and replication
Finally, results must be stable. If a questionnaire produces wildly different outcomes when repeated, or if it only works in one narrow setting, it lacks robustness. Replication is the bedrock of scientific confidence.

Ethics and Testing

One aspect often overlooked in business settings, but central to psychology, is ethics.

Ethical research in UX and psychology requires informed consent, confidentiality and respect for participants — not just collecting responses.

Psychologists are bound by strict ethical guidelines when designing and administering questionnaires. These include obtaining informed consent, ensuring confidentiality, protecting vulnerable participants, and being transparent about how the data will be used. Ethical oversight is not an optional extra; it is fundamental to protecting participants and to the integrity of the research.

Design thinking workshops almost never cover this dimension. Participants are encouraged to experiment with methods, yet there is little discussion about ethical safeguards or the potential harm of asking poorly thought-out questions. For example, questionnaires that probe sensitive areas such as health, finances or personal identity can cause distress if handled carelessly.

Ethical testing also extends to ensuring that questionnaires work as intended before they are rolled out. Piloting is not simply about refining wording; it is about checking that the instrument is safe, respectful, and appropriate for the intended audience.

Practical Guidance for Non-Psychologists

Not everyone designing a questionnaire is a psychologist, and nor do they have to be. But there are steps that anyone can take to improve the quality of their work and avoid the most common traps:

  • Keep it clear and singular: ask one thing at a time, using simple language.
  • Avoid leading language: keep the tone neutral.
  • Mix item wording: include both positively and negatively worded items to reduce the risk of automatic responses.
  • Check internal consistency: repeat key constructs in slightly different ways to test whether respondents are answering consistently.
  • Think about your sample: ensure your respondents are representative of the wider population you want to draw conclusions about.
  • Mind your numbers: a rule of thumb is that larger samples give more reliable results. Very small groups should be treated with caution.
  • Consider ethics: ask yourself whether your questions could cause discomfort, and be clear about how data will be used.
  • Do not over-interpret: be modest in your conclusions and aware of the limitations of your data.
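The "mind your numbers" point can be made tangible with the standard margin-of-error approximation for a proportion from a simple random sample. The figures below are illustrative, not from any real survey:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed proportion p
    from a simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# The same "60% satisfied" headline is far less trustworthy
# from 25 respondents than from 1,000
for n in (25, 100, 400, 1000):
    moe = margin_of_error(0.6, n)
    print(f"n={n:4d}: 60% +/- {moe * 100:.1f} percentage points")
```

With 25 respondents the true figure could plausibly sit anywhere between roughly 40% and 80%; with 1,000 it narrows to around 57–63%. This also assumes a representative sample — a biased sample stays biased no matter how large it grows.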

These principles won’t turn a quick survey into a scientifically validated instrument, but they will improve its reliability and reduce the risk of drawing the wrong conclusions.

Why This Matters Today

In the world of UX, customer experience and organisational change, questionnaires are ubiquitous. They inform product design, shape customer journeys and provide the data on which leaders make multi-million-pound decisions.

Yet if those questionnaires are poorly designed, the insights they yield may be misleading. The cost is not simply wasted effort but strategic missteps that can damage organisations and alienate customers. This is why psychologists have always treated questionnaire design with such seriousness. It is not about being pedantic. It is about ensuring that when we measure, we truly measure, rather than fool ourselves with the illusion of data.

Artificial intelligence adds a further twist to this story. Tools are now available that can generate questionnaires instantly, using pre-set templates or creating bespoke items on the basis of a prompt. At first sight, this seems efficient. In reality, it risks amplifying the very problems we have been discussing.

AI can produce questionnaires that look polished and professional, but appearance is no guarantee of validity. If the underlying construct is poorly defined, if the sample is inappropriate, or if ethical safeguards are missing, the questionnaire will still be flawed. Moreover, the speed and scale of AI-generated surveys mean that flawed instruments can be deployed to thousands of people before anyone has paused to ask the fundamental question: what exactly are we measuring?

There is also the danger of an illusion of authority — that because AI produced the questionnaire, it must somehow be objective, when in fact it may simply be reproducing existing biases at scale. Psychologists, by contrast, are trained to slow down, to test, to refine, to question, and to uphold ethical standards. These disciplines matter more than ever in an age where anyone with access to an AI tool can launch a survey in seconds.

So the next time someone shares the results of a questionnaire, pause for a moment and ask: what actually is this questionnaire? What construct does it measure, how was it designed, what ethical safeguards were in place, and how reliable are its conclusions? And now, more than ever, add one more question: was this generated by AI? Because if it was, the need for rigour, scrutiny and ethical reflection is even greater. The temptation to treat quick, polished output as scientific insight is powerful — but without discipline, it can take us further away from truth rather than closer to it.

References

Photo of two key research textbooks used in questionnaire design and UX research: Business Research Methods by Alan Bryman and Emma Bell, and Research Methods and Statistics in Psychology by Hugh Coolican.
Essential reading on research methods and questionnaire design: Business Research Methods (Bryman & Bell) and Research Methods and Statistics in Psychology (Coolican).

For readers who want to go deeper, two excellent starting points are:

  • Alan Bryman and Emma Bell, Business Research Methods
  • Hugh Coolican, Research Methods and Statistics in Psychology




Written by Simon Robinson

Co-author of Deep Tech and the Amplified Organisation, Customer Experiences with Soul and Holonomics: Business Where People and Planet Matter. CEO of Holonomics
