Future of Quantitative Research

Mani Pande
UXR-manipande
7 min read · Feb 13, 2022

As a quant researcher with a background in forecasting research, I am always looking for signals of change that can help forecast the future. In the last 15 years, I have seen quant research evolve with the introduction of new tools like Qualtrics and SurveyMonkey that have made it easier to design surveys and conduct statistical analysis.

I gave a talk at the UX360 Conference on this subject recently and want to share my thoughts. But before I get into how quant research is evolving, let me dive into some principles of forecasting.

“The future is already here. It’s just not evenly distributed,” William Gibson, science fiction writer.

This quote beautifully captures the basic principle of forecasting research: if you would like to forecast the future, you should look around for signals that point to what to expect. As a forecaster, one looks for signals in the present world in order to forecast the future.

But what is a signal?

A signal is a small or local innovation or disruption that has the potential to grow in scale and geographic distribution. It can be many things: an innovation, a new product, an event, a local trend or an organization.

Source: IFTF

In short, it is something that catches our attention at one scale and in one locale and points to larger implications for other locales or even globally.

For example, when I worked at IFTF in 2008, we interviewed a woman who was lifecasting on the now-defunct Justin.tv, a site that allowed users to broadcast video. She was sharing her life 24/7. Her channel had hundreds of regular viewers who not only watched her life but also interacted with each other and with her through a chat box on her livestreaming home page.

We saw this as an early signal of a future where anyone would be able to create and share videos. We came up with a forecast that as video technology became ubiquitous, cheaper and easier to use, there would be more and more streaming platforms where everyone could be the star of their own show. We have seen this become a reality: from individuals becoming influencers on TikTok or Instagram, to YouTube breakout stars like Marques Brownlee (popularly known as MKBHD), who reviews technology, to HBO’s breakout star Issa Rae, who started her career on YouTube with Awkward Black Girl before moving to HBO for her hit series Insecure.

As a quant researcher with a background in forecasting research, I have been seeing several signals that survey and statistical research is becoming more and more deskilled due to automation and the development of new survey tools like Qualtrics and SurveyMonkey.

So what are the signals of change that I have seen?

Most of you probably know the difference between a t-test and a chi-square test. A t-test is used to compare the means of two populations and is based on the assumption of normality. A chi-square test allows researchers to test whether or not there is a statistically significant association between two categorical variables. Chi-square focuses on whether observed frequencies differ from expected frequencies and doesn’t rely on computing the arithmetic mean or variance. It should be used when the variables we are studying are categorical.
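To make the distinction concrete, here is a minimal sketch of what each test actually computes, using only Python’s standard library. The data is invented for illustration: numeric task times for the t-test, and a 2x2 variant-by-conversion table for chi-square.

```python
import math
import statistics

# Welch's two-sample t-test: compares the MEANS of two numeric samples.
# Hypothetical task-completion times (seconds) for two design variants.
a = [34.1, 28.5, 31.0, 29.8, 35.2, 30.4]
b = [26.9, 25.1, 27.8, 24.6, 28.3, 26.0]

def welch_t(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (mx - my) / math.sqrt(vx / len(x) + vy / len(y))

# Chi-square test of independence: compares OBSERVED vs EXPECTED counts
# for two categorical variables, e.g. variant (A/B) x converted (yes/no).
observed = [[30, 70],   # variant A: 30 converted, 70 did not
            [45, 55]]   # variant B: 45 converted, 55 did not

def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

print(round(welch_t(a, b), 2))     # about 4.14: means clearly differ
print(round(chi_square(observed), 2))  # 4.8, above the 3.84 critical
                                       # value for 1 df at the 5% level
```

Note that the chi-square computation never touches a mean or a variance of the underlying variables; it works entirely on counts, which is exactly why it is the right tool for categorical data.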

A few years ago, one of the researchers who used to work with me, and who didn’t have an extensive background in statistics, decided to level up her analysis beyond reporting frequency distributions. She used Excel’s advanced capabilities to run a t-test; she was able to run the analysis, and the program did throw out some results. She came to me to help her interpret them. I pointed out that she should have run a chi-square test for independence since the variables were categorical, and in fact the results didn’t make sense since the variables were not identified correctly.

It was an a-ha moment for me. I thought: wow, you can run the analysis without understanding the nuances of the tests, and if it wasn’t for me, the researcher would have reported the results without realizing they were not correct.

Here is another example.

The other day I was speaking with another UX research colleague, and we were discussing the differences between market research and UX research. I pointed out to her that market research specializes in pricing research. She told me that her team had conducted pricing research although no one had any background in market research. A team member with a background in quant research conducted conjoint analysis since it’s available in Qualtrics, which provides a lot of instructions and hand-holding on how to do it.

I was surprised and shocked at the same time, because the last time I did pricing research, despite having a deep background in stats, I hired a team of statisticians to help with the conjoint analysis.

But here we are today, where you can use a tool like Qualtrics to run a complex analysis without having in-depth knowledge of the method.
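To give a flavor of what such a tool automates, here is a deliberately simplified, ratings-based sketch of the core idea behind conjoint analysis: estimating how much each attribute level contributes to preference. The profiles and ratings below are invented, and real tools like Qualtrics use far more sophisticated choice-based models; this shortcut of averaging within a level only works for a balanced full-factorial design with main effects only.

```python
from statistics import mean

# Four hypothetical product profiles (full-factorial: price x brand)
# with average respondent ratings on a 1-9 preference scale.
profiles = [
    {"price": "low",  "brand": "A", "rating": 8.0},
    {"price": "low",  "brand": "B", "rating": 6.0},
    {"price": "high", "brand": "A", "rating": 5.0},
    {"price": "high", "brand": "B", "rating": 3.0},
]

grand_mean = mean(p["rating"] for p in profiles)

def part_worth(attribute, level):
    """Mean rating of profiles at this level, relative to the grand mean.
    Valid only for a balanced full-factorial, main-effects design."""
    ratings = [p["rating"] for p in profiles if p[attribute] == level]
    return mean(ratings) - grand_mean

print(part_worth("price", "low"))  # 1.5: low price adds utility
print(part_worth("brand", "A"))    # 1.0: brand A adds utility
```

Even in this toy version, the pitfalls are visible: the estimates are only meaningful if the design is balanced and the attributes don’t interact, which is exactly the kind of assumption a tool will not check for you.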

Similarly, a few years back my team was working on a brand survey, and the researchers used templates in SurveyMonkey to create it without any prior knowledge of how market researchers have historically measured the different dimensions of a company’s brand.

What are all these signals telling us? There is clearly a trend toward deskilling, automation and templatization, and some might even argue toward democratization of quant research. It’s easier to write surveys because there are templates that researchers can use. In addition, tools like SurveyMonkey and Qualtrics make it easy for researchers without deep experience in statistics to use techniques that they have not used previously.

I will also argue that this is a positive trend rather than a negative one, since more and more researchers are being empowered to do quant research that they could not have done previously. It’s always better when researchers know more techniques and use the right one to answer the business question at hand.

But this does come at a price. Being in the weeds and getting practice is what provides a deep understanding of the data and methodology, and that seems to be getting lost in this automation. So basically it’s a tradeoff: more accessibility, shallower understanding.

So what is needed from researchers to use these powerful tools more effectively?

It’s back to the basics. Basic statistical skills matter even more. Before powerful tools like Qualtrics and SurveyMonkey, if you didn’t have statistical knowledge, you could not run an analysis. But these tools have empowered those who don’t have deep knowledge to run analyses anyway. This creates a lot of room for error. As you know, in research we have a famous saying, “garbage in, garbage out,” and the probability of choosing the wrong type of analysis is much higher now.

It’s important to understand basic statistics to ensure that researchers don’t make mistakes. My advice to everyone is to take an upper-level statistics class in undergrad, and that should set you up for success. Coursera, Udacity, etc. also have several classes that teach basic statistics. Look for a class that covers basic issues like sampling and simple statistical analyses such as correlation and t-tests.

Many researchers that I have worked with went to some excellent HCI programs in the US, but none of these programs teach basic stats, even though many of their graduates become UX researchers rather than designers.

My advice to those who are currently in an HCI program, or plan to do a Masters in HCI and want to be a UXR, is to take an upper-level class in basic statistics in a sociology or psychology department. This will ensure that you are choosing the optimal method. For example, in some situations a pure ranking question would answer your question as well as a MaxDiff, which requires a much larger sample. Even though our tools can guide the analysis, it’s imperative for researchers to know the best technique for the data and question at hand.

Learn basic survey writing: I would encourage everyone to also take a basic class that teaches survey writing, since it’s important for researchers to know how to craft survey questions: what type of scale to use, whether it should be a 5-point or 7-point scale, whether the scale should be labeled, and how to avoid common survey mistakes like double-barreled questions, double negatives and biased questions.

Yes, you can rely on the templates that tools provide, but those templates might not work for the kind of business questions you need to answer. Therefore it’s important to know the common pitfalls to avoid.

Learn sampling strategy: It’s important to figure out sampling so you know what the optimal sample for your research should be. Don’t go only by rule of thumb or intuition. Remember, tools at best can make recommendations, but they don’t know your research goals, what kind of analysis you are planning, or whether you are going to compare sub-samples, which will influence your sub-sampling strategy and also impact the total sample size.

Before you start writing your survey, come up with a sampling strategy: be explicit about the margin of error you are comfortable with, and work backwards from it to arrive at a sample size. If you are going to compare sub-groups, have a sub-sampling strategy.
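Working backwards from a margin of error can be sketched with the standard sample-size formula for estimating a proportion, n = z² · p(1 − p) / e². The numbers below use the most conservative assumption (p = 0.5) and a 95% confidence level:

```python
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion within +/- margin_of_error.
    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative (largest-n) assumption about the true proportion."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))  # 385: the classic "+/- 5% at 95% confidence"
print(sample_size(0.03))  # 1068: tightening to +/- 3% nearly triples n
```

And note that this n applies to each group you want to read separately: if you plan to compare, say, four sub-groups at ±5%, each sub-group, not the total, needs roughly that many completes.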

I have seen reports where researchers have claimed definitive relationships from small sample sizes. This leads to a false sense of confidence when all the data is telling you is that the relationship is directional rather than statistically significant. This is a very common mistake, and the tools can’t help you here.

Overall, I believe that it’s a positive trend that tools are empowering more researchers to do quant analysis. But it’s on us as researchers to ensure that we level up our basic survey, sampling and statistical analysis skills to fully leverage these tools.

Here is the video of the talk:
