Make Surveys Great Again! – What Donald Trump Can Teach Us About How Not to do User Research

Ryo Mac
Published in Psynamic
6 min read · Feb 20, 2019

“How would you rate my speech? a) Historic, b) great, or c) good”

If this sounds like a joke, well… it basically is. On his show The Colbert Report, political satirist Stephen Colbert would famously ask many of his guests (especially those who were against President Bush at the time) a simple question:

“George Bush… great president, or the greatest president?”

The irony of a question with no negative options, however, seems to be lost on the current White House administration, considering the “Official State of the Union Approval Poll” it released immediately following Donald Trump’s live national teleprompter-reading on February 5th. Lucky for us, this survey provides an excellent example of the bad data-collection practices that UX designers and researchers should learn to avoid.

1) Leading is Misleading

The concept of a “leading question” is crucial to understanding how to ensure sound research methodology. A leading question is simply one written in such a way as to manipulate the participant into answering a certain way. Perhaps unsurprisingly, this survey is full of them. For instance, Question 7 asks:

7. Do you believe Democrats should celebrate our great success instead of obstructing it?

Before you even get to the answer options, the question implicitly states that there is “great success” to celebrate. It then implies, without saying so explicitly, that Democratic party members are obstructing said success, while the actual question only asks whether you think they should celebrate the alleged success. Beware of such false dichotomies; they are good examples of questions whose answers give you essentially no real value.

A good rule of thumb is to stay away from questions that begin with “Do you believe…?” or “Do you think…?” and then inject an opinion you want confirmed. You can always ask “What are your thoughts on…?” without giving any more context (unless context is necessary to understand the question).
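If you draft surveys often, you can even automate a first pass on this rule of thumb. The sketch below is a simple illustrative heuristic (the stem list and function name are my own, not from any real survey tool) that flags questions opening with the stems mentioned above; it is no substitute for having a human review your wording.

```python
# Illustrative heuristic: flag question stems that often signal a leading
# question ("Do you believe...", "Do you think..."). Stems are examples only.
LEADING_STEMS = ("do you believe", "do you think", "don't you agree")

def flags_leading(question: str) -> bool:
    """Return True if the question starts with a known leading stem."""
    # str.startswith accepts a tuple of prefixes, so one call checks them all.
    return question.lower().lstrip().startswith(LEADING_STEMS)

print(flags_leading("Do you believe Democrats should celebrate our great success?"))  # True
print(flags_leading("What are your thoughts on the State of the Union address?"))     # False
```

A check like this only catches surface patterns; a question can smuggle in a premise without using any of these stems, so treat the output as a prompt for review, not a verdict.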

2) Jargon introduces ambiguity

People like to appear intelligent and informed, and this may influence the validity of your results. Consider the segment “Lie Witness News” from the TV show Jimmy Kimmel Live!, where unsuspecting bystanders are asked for their opinions on either made-up stories or current events with substantial details changed. Not only do the interviewees go along with the stories, some of them tell vivid accounts of where they were when they “heard the news,” whether it’s regarding a military intervention in Wakanda — the fictitious homeland from the superhero movie Black Panther — or the recent “victory” of the losing team at the 2019 Super Bowl. Question 5 actually fixes the problem of jargon only to introduce another problem (becoming a leading question).

5. Do you believe President Trump properly addressed the real danger of MS-13 gang members pouring into our country?

While this is not occupational jargon, the term “MS-13” might not be known to all participants. In many cases, researchers may want to include a brief explanation above the question to ensure that participants are all on the same page. Luckily, the survey designers did a relatively good job of explaining it by saying “MS-13 gang members.”

I still think it’s better to separate the question from the explanation, but depending on the context, this kind of inline explanation suffices. The problem with this question, however, is that the explanation veers into leading territory again, asserting that there is a “real danger” and vividly describing what that danger is. Don’t allow your clarifications or explanations of jargon to turn into leading questions.

3) Consistency & Editing

Stylistic choices can certainly be made with the type of language you use in a survey; however, it is important to stay consistent. For example, Question 2 oddly has “CRISIS” capitalized, almost as if to indicate an acronym. Presumably, it is merely used for emphasis, but without another example, it is hard to tell. The survey designers also inconsistently capitalize the first letter of certain words. For example, the word “Greatness” is capitalized in Question 3, while the capitalization in Question 6 is instead on the word “Harass,” but not “great.”

6. Do you believe Democrats only say they don’t want a wall to Harass our great President?

In this particular survey, the capital letters do not actually detract from participants’ understanding, but the issue is worth keeping in mind if you are surveying participants on more complicated or technical matters. After all, even something as simple as a comma can completely change the meaning of a sentence. So rather than surveying tons of participants before realizing you have ambiguous wording, silly mistakes, or confusing inconsistencies, double-check your survey and get someone else to edit it afterwards.

4) Question Types & the Open-ended Question

Multiple-choice questions have an inherent problem: they constrain people’s ability to communicate. Sometimes that constraint is both intentional and absolutely justified (it all depends on what you are researching); but in UX research, you generally want to know what users are thinking. So why constrain them? The solution is simple: use open-ended, non-leading questions. If, for whatever reason, you need more quantitative data, consider ordinal or scaled data. The Likert scale is a common method; it includes options of varying magnitude to help establish the strength of people’s opinions. For example: strongly agree, agree, neutral, disagree, or strongly disagree; or, using numerical values: -3, -2, -1, 0, 1, 2, 3. But in UX design, you generally want open-ended questions.
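To make the Likert idea concrete, here is a minimal sketch of how labeled responses map onto numeric codes so they can be summarized. The scale labels, codes, and sample responses are illustrative assumptions, not data from the survey discussed above.

```python
# Minimal sketch: encoding a 5-point Likert item and summarizing responses.
# Labels, codes, and sample data are illustrative only.
LIKERT_SCALE = {
    "strongly disagree": -2,
    "disagree": -1,
    "neutral": 0,
    "agree": 1,
    "strongly agree": 2,
}

def summarize(responses):
    """Map labeled responses to their numeric codes and return the mean."""
    codes = [LIKERT_SCALE[r] for r in responses]
    return sum(codes) / len(codes)

responses = ["agree", "neutral", "strongly agree", "disagree", "agree"]
print(summarize(responses))  # positive mean: responses lean toward agreement
```

Note that treating ordinal codes as interval data (taking a mean) is itself a methodological choice; medians or frequency counts are often the safer summary for Likert items.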

As it turns out, the final question in the survey is the only one of any actual validity, since it is neither a leading question nor a constrained multiple-choice question intended to manipulate. The question simply asks if you have any “thoughts you’d like to share,” and then gives a text box for you to write whatever you want. This is exactly the kind of thing researchers should be doing, because it allows participants to answer freely and does not lead them toward a predetermined answer.

HOWEVER (see?… the inconsistency from tip #3 is a bit strange), even this question is inherently flawed — not by the question itself, but by the rest of the survey. The survey designers were clearly trying to capitalize on the psychological concept known as priming. Priming refers to the phenomenon whereby our perception of something is influenced by something previously experienced (e.g., leading questions, in our case), without our conscious awareness of that influence. By the time respondents reach the final question, they have read through an entire narrative of the struggles of “our great President”. The narrative goes: Donald Trump has been doing a great job, and he is trying to unite the country, especially since dangerous foreign gang members are infiltrating the border, and he wants to build a wall to keep the country safe from this national crisis, but Democrats are obstructing his progress simply because they want to harass him. The respondent has been primed with this narrative, making the entire survey questionable at best.

The Bottom Line

These are the types of issues that UX researchers must be cognizant of when designing surveys to administer to users. Don’t allow your own assumptions to bias the wording of your questions, and stay away from questions with built-in premises. Stay consistent with whatever stylistic choices you make for your surveys, and avoid jargon or language that may be ambiguous, unless you intend to explain it. And one very important point: open-ended questions are your friend. So basically, this survey is a perfect example of what not to do.

Now, I know what you’re thinking… “Nobody knew surveys could be so complicated!” But if you do a very strong, tremendous job, then you’ll have a great relationship with the users. In fact, by eliminating bias in your survey, you may be able to prevent your respondents from giving you fake views.
