How user memory may be impacting your survey data: Cognitive psychology considerations in UX survey design

Ashley Gries
Cisco Cloud Security Design
7 min read · Dec 18, 2019

My journey to becoming a user experience researcher (UXR) has been unconventional, to say the least.

Ever since I was 16, all I ever wanted to do was help people, and I was certain that my calling was in mental health. I had followed all the steps one might follow to become a clinical therapist, up to earning a Master’s degree in the field, taking on clients, and accumulating roughly 700 pre-licensure hours. It wasn’t until I was nearing the end of my clinical practicum that I came upon an impasse.

I realized that once I graduated from my program and entered my internship, I would be expected to work for free for the next two years while I earned the remainder of my 3,000 clinical hours. With no social or familial support system, this seemed an impossible feat. I feared I’d need to earn my hours at top speed to minimize the time I spent unemployed. Perhaps take out loans? Start a GoFundMe? The more realistic options were to work a full-time job on top of a full caseload and bite the bullet with an 80-hour work week, or to work part-time and miss a couple of meals.

I came to realize: the mental health profession is not designed to let those earning their license take care of themselves. How can we expect them to take care of others?

woman with her head down on a desk riding a train

I decided at that point that, for my own mental and physical health, I was going to have to pivot. This provided an amazing opportunity to reevaluate all of the things that made me fall in love with psychology in the first place. Things like:

  • The ability to work humanistically
  • Investigating collaboratively and making meaning from seemingly disjointed narratives
  • Being an empathic observer and practitioner
  • Feeling encouraged to approach things ‘trans-diagnostically’ from different perspectives
  • Generally just learning about and working with people
  • The journey of continuous applied learning

It was through this reevaluation I found a deep and passionate love for the research world and found myself working in a Stanford neuropsychology lab, elbows deep in the hyper-competitive realm of academia. My eventual home at Cisco Umbrella came through a desire to further integrate these two passions, which I was able to coalesce in UX research.

Research in industry =/= research in academia

Something I have had to (and continue to) actively reconcile is the profound difference in standards and goals for research design and implementation between academia and enterprise business product research. This is particularly evident in the overuse and overgeneralization of surveys.

The graduate student in me was audibly concerned when I began working with teams that didn’t seem aware of the disparity between the time and energy allocated to making major product development decisions and that allocated to actual research design, significance testing, or analysis.

Throughout my time working in tech, I have learned that this imbalance in resource allocation exists because business strategy analysis typically supersedes the needs of specific customer segments. As a result, when it comes time to do “people research”, there tends to be a major skew towards surveys as the primary method of data collection, because they let researchers obtain information from a large sample of individuals of interest relatively quickly.

This isn’t necessarily a bad thing, as surveys allow a lot of flexibility in design. However, like all research methods, surveys come with sources of error, and it is up to more advanced practitioners to use their awareness of them to implement strategies for error reduction.

So what are the overall goals in survey research? How does this apply to researching the user experience?

In a typical survey, respondents are asked to self-report thoughts, attitudes, and beliefs about their experience, in our case with a product. Surveys are typically deployed via email or on paper and completed within a designated time frame. Respondents will likely share a commonality that makes them the right fit to complete your survey, and your sample should be considered in the overall research design.

As user experience researchers, our goal is to understand the experience of the user — and as such we should be sensitive to (or at least aware of) the impact of dispositional and contextual factors on human thought and behavior.

Many psychologists and social scientists conceptualize ‘attitudes’ as “enduring dispositions that are expressed by evaluating a particular entity with some degree of favor or disfavor”. From this perspective, contextual influences on respondent self-report are problematic noise, believed to cloud a respondent’s “true” thoughts and attitudes. An alternative approach treats attitudes as “evaluative judgments that are formed on the spot, based on whatever information is accessible at that point in time”. From this perspective, only context-sensitive evaluation is relevant and able to guide behavior in adaptive ways; in our case, it would encourage customers to provide us with the right information to improve their experiences.

A neon visual of the human body and brain with text that says “it’s inside us all”
Photo by Bret Kavanaugh on Unsplash

With all of this in mind, the most important considerations in survey research are focused on how we might design the right survey to capture how things are for our respondents at a specific point in time.

One of the major challenges with survey validity is the time that elapses between a product encounter and the user being asked for feedback. The wider this gap, the less reliable the responses. The wider the gap, too, the more important it becomes to ask the right questions to elicit responses that are honest indications of attitudes towards the product. Surveyors must ensure that their questions are (1) written so that they are interpreted in their intended way and (2) designed with the right cues to evoke distinct and relevant memories.

How Might We: use our knowledge of human memory to measure a “truer” user experience?

So, if the primary goal for user experience research is to understand the attitudes, perspectives, and motivations of the user, how can researchers leverage survey questions to enhance memory recall and perhaps get a “truer” measure of the user experience? If we’re asking questions to better understand attitudes, the survey respondent may answer either by accessing a previously formed attitude judgment using direct retrieval, or by forming a judgment on the spot based on whatever relevant information is accessible at that point in time.

Alternatively, how might we narrow this gap between encounter and self-report to gain more reliable respondent data and reduce the cognitive load needed to interpret questions? We attempted to answer this question by deploying two surveys:

  1. One survey was deployed in-product using a message chat bubble and funneling users into an online survey tool. The chat bubble was shown to all customers currently using the product with a prompt to complete the survey if they had time upon receiving the message.
  2. The other survey was emailed via distribution list with email reminders sent 3x over 3 weeks.

The surveys differed in design, but both asked the NPS question at the very end and were meant to reach the same target demographics.
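For readers unfamiliar with how the NPS score behind that question is derived, here is a minimal sketch using the standard formula (percentage of promoters, ratings 9–10, minus percentage of detractors, ratings 0–6). The response data is entirely made up for illustration; it is not our actual survey data.

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 ratings.

    Promoters rate 9-10, detractors 0-6 (passives, 7-8, are
    counted in the total but neither bucket). The score is the
    percentage of promoters minus the percentage of detractors,
    so it ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical response sets from the two channels (made-up data)
in_product = [9, 10, 8, 9, 7, 10, 6, 9]
emailed = [7, 8, 6, 5, 9, 7, 8, 6]
print(nps(in_product), nps(emailed))  # prints: 50 -25
```

Because passives are excluded from both buckets, even modest shifts in how many respondents land at 9 versus 8 can swing the score sharply, which is part of why two channels can produce such different numbers from similar qualitative feedback.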

Two men standing at a table talking to each other
Photo by Nik MacMillan on Unsplash

While the qualitative data we gathered seemed to trend in similar directions, the quantitative feedback we received (NPS score) varied drastically. So now our team was met with a new problem — which NPS score can we trust more?

Most of what we remember as humans comes via direct retrieval, where items of information are linked directly to a cue or question. Contrast this with something like a sequential scan, in which a computer systematically searches through the entire contents of its memory until a match is found.
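The contrast above can be loosely illustrated in code, with a list scan standing in for the exhaustive search and a keyed lookup standing in for cue-linked retrieval. The cue/memory pairs here are hypothetical examples, not anything from our study.

```python
# Hypothetical cue -> memory pairs for illustration
memories = [
    ("login page", "the new dashboard layout"),
    ("pricing page", "the confusing tier names"),
    ("setup wizard", "the step that kept failing"),
]

def sequential_scan(query):
    """Examine every stored item in order until one matches."""
    for cue, memory in memories:
        if cue == query:
            return memory
    return None

# Direct retrieval: the cue itself indexes the memory, no search.
cue_index = dict(memories)

def direct_retrieval(query):
    return cue_index.get(query)

print(sequential_scan("setup wizard"))   # prints: the step that kept failing
print(direct_retrieval("setup wizard"))  # prints: the step that kept failing
```

Both approaches find the same memory, but only the second mirrors how a strong cue in a survey question can surface the relevant experience immediately rather than forcing the respondent to reconstruct it.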

A cartoon man fishing in a lake that is characterized to look like a brain

Retrieval cues help facilitate recall and are considered most effective when they have a strong link to the information to be recalled. Memory recall also appears to be state-dependent to some extent: at a minimum, recall is enhanced when encoding (forming the memory) and retrieval (accessing the memory) occur in the same state. This enhanced-recall phenomenon has been studied in particular as it pertains to emotional state.

Individuals tend to retrieve information more easily when it has the same emotional content as their current emotional state AND when the emotional state at the time of retrieval is the same as at the time of memory encoding.

Final thoughts

Using this information to frame our respondents’ context: if ‘state’ is an indicator of stronger response reliability, then it is my assertion, based on how memories are retrieved, that surveying in-product offers a more qualified account of the customer experience.

Data collected by researchers via survey still have merit, even if respondents were not surveyed immediately after their experience. That said, there is a delicate balance between getting accurate data and annoying your customers, and in-product messages, especially ones requesting survey completion, are likely to tip that scale.



Mental health practitioner turned UX researcher — just trying to navigate the tech jungle with my dog.