Incorporating Mixed Methods: Bringing Quantitative Tools To Your Qualitative-Focused Team

Alex Sher
Oct 8, 2019 · 6 min read


This is part 3 of an ongoing series for qualitative research practitioners looking to incorporate quantitative methodology into their work. Part 1 covers When Can I Conduct Quantitative Research? Part 2 covers How Big Should My Sample Size Be?

In a recent project, my team was designing a feature to encourage employees to fill out a form on the same day as the event it documented. However, we had never reached consensus on why same-day submission mattered. To examine that assumption, we compared employees’ monthly performance to how often they completed the form within the ‘correct’ window. The analysis revealed no correlation between being a high performer and submitting the form on time. Knowing this empowered our team to have more targeted conversations about our goals.

When it comes to buzzwords in the UX research space, ‘mixed methods’ is everywhere. As ‘big data’ steps aside for ‘thick data’, teams are trying to combine qualitative and quantitative approaches to get more insight from their data. But what does this look like in practice? And how can you get your qual-focused team to combine these approaches effectively?

At Lextech, quantitative data makes our team faster and more focused. Before diving into 1:1 sessions, we often use a survey to get quick signal on what design direction might be best received by our user base.

Without using terms that require an Intro to Data Science course to understand, here are a few steps to help you start asking quantitative questions.

Align your team on your biggest unknowns and risks

What are you trying to learn as a team? Where might better understanding help drive you forward? Are there any assumptions your team operates under that might need to be evaluated further? Before thinking about what methods you plan to use, make sure you have an understanding of your team’s highest-priority learning goals.

Being purposeful about what you want to learn will allow you to make smart decisions about which tools will be the best fit, instead of throwing quantitative spaghetti at the wall and seeing what sticks.

Make sure you bring the right people into this conversation to set your researchers up for success in the long and short term. Talk to your team about what questions need to be answered immediately, while also gaining an understanding of what areas you’d like to better understand over the next few months or more.

Know what data you have

It’s hard to get started without an understanding of what exists. Sit down with the data person on your team and go through what data you already have. Learn how it’s collected, how often, how it’s organized and why it’s organized that way, what data you can’t get, and what data you could reasonably expect to have. You might be surprised what questions pop up just by exploring this data.

Brainstorm questions

Review your learning goals and consider what data is available. Is there any data you’d like to explore further? Do you want to know whether two variables might be connected? Can you collect new data and measure it against a desired or undesired outcome?

A few types of questions you can ask:
Q: Will this design decision make an impact on usage? Is one version ‘better’ than the other?
Example: You have an idea for a new visual treatment for a graph and want to see if it encourages interaction with the graph. You can A/B test by showing a different version to each of two groups of users and comparing the percentage of each group that interacts with the graph to see if the proportions meaningfully differ.
Tip: When comparing 2 different treatments to see if they have an impact on a specific metric, MeasuringU has an easy calculator to analyze A/B testing here: https://measuringu.com/ab-cal/
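If you’d rather see the math behind a calculator like that, the comparison boils down to a two-proportion z-test. Here’s a minimal sketch in plain Python with hypothetical interaction counts (the numbers and the function name are mine, not from a real study):

```python
from math import sqrt, erf

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pool the proportions under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 45 of 200 users interacted with version A,
# 70 of 200 with version B
z, p = two_proportion_z_test(45, 200, 70, 200)
print(z, p)
```

With these made-up numbers the p-value comes in well under 0.05, so the difference in interaction rates would be unlikely to be chance alone. For small samples, a calculator like MeasuringU’s (which uses more appropriate small-sample corrections) is the safer choice.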

Q: How do these two variables relate to each other?
Example: Your team always talks about how the lowest performers never open your app. Connecting performance data to usage data can help verify this assumption.
Tip: While there are various statistical tests that can help explore the relationship between two variables, depending on the type of variables and what you’re hoping to learn, a quick scatterplot can often reveal the relationship before any formal analysis. Once you’ve opened the data in Excel, a scatterplot is just a few clicks away.
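When you want a number to go with the scatterplot, the Pearson correlation coefficient is the usual starting point for two continuous variables. A minimal sketch with hypothetical usage-vs-performance data (the lists and values below are invented for illustration):

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: weekly app opens vs. monthly sales per user
app_opens = [1, 3, 4, 6, 8, 9, 11, 14]
sales = [2, 4, 5, 5, 7, 9, 10, 13]
r = pearson_r(app_opens, sales)
print(r)
```

A value near +1 or −1 suggests a strong linear relationship; near 0, little or none. Remember that correlation only describes association — it won’t tell you whether app usage drives performance or the reverse.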

Q: How likely is it that the true average for this metric is in a certain range?
Example: Your goal for a new design was to enable users to complete a task within 2 minutes. Tracking how long tasks actually took and calculating a confidence interval at https://measuringu.com/ci-calc/ tells you whether you met your goal.
Tip: This is a common method in usability testing, whether you’re prioritizing bugs to fix or determining whether your design met its success metrics. If you want to see if a metric is most likely over a benchmark, MeasuringU also has a handy calculator for that here: https://measuringu.com/onep/
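Under the hood, a confidence interval for a mean is just the sample mean plus or minus a margin of error. Here’s a minimal sketch for the task-time example, using hypothetical timings and a looked-up t critical value (everything below is illustrative, not real study data):

```python
from statistics import mean, stdev
from math import sqrt

def mean_ci_95(samples, t_crit):
    """95% CI for the mean; t_crit is the t critical value for len(samples)-1 df."""
    m = mean(samples)
    margin = t_crit * stdev(samples) / sqrt(len(samples))
    return m - margin, m + margin

# Hypothetical task times in seconds across 10 usability sessions
times = [95, 110, 102, 130, 88, 125, 140, 105, 118, 99]
# t critical value for 9 degrees of freedom at 95% confidence is about 2.262
low, high = mean_ci_95(times, 2.262)
print(low, high)
```

With these made-up timings the interval spans the 120-second goal, so this sample would be inconclusive — you’d need more data before claiming the goal was met or missed. (Calculators like MeasuringU’s also handle the log transforms usually recommended for skewed task-time data.)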

For all of these, start with the null hypothesis that a variable has no effect on the outcome. Khan Academy has a simple introductory video on hypothesis testing here: https://www.khanacademy.org/math/statistics-probability/significance-tests-one-sample/idea-of-significance-tests/v/simple-hypothesis-testing

Consider Timeline

Prepare for the fact that different quantitative analyses require different amounts of time to collect a proper sample or wrangle the data into shape. Cleaning data often takes longer than the analysis itself.

You should also check whether you need to pull your data from a specific time period to ensure it’s relevant to what you’re asking and to prevent unrelated factors from affecting your data. On a recent project, I wanted to understand how an app’s usage metrics may have affected users’ sales performance. How long users spent in the app per week and what days of the week they opened the app were of particular interest. We decided this data was of interest just days after a major update. To prevent the buzz around the update from influencing our results, I waited a few weeks before pulling the data. To ensure you are learning what you actually want to learn, consider what external factors might affect the numbers when determining your timeline.

Consider Privacy

If you’re dealing with sensitive information, make sure your team is handling it responsibly.

Prioritize Questions

Go through your list of questions and, if you see a few where quantitative exploration might be a good fit, move forward with them. Especially when this is your first dive into quantitative research, focus on a quick win. Finding a straightforward question that you know can add immediate value to your project will help you maintain more momentum than a longer endeavor. Take this time as an opportunity to dip your toe in; you’ll get more comfortable learning different methods the more you understand how quantitative analysis can provide insights for your work.

Moving Forward

When interpreting the results of your analysis, revisit the decisions you made when selecting your data. What are the shortcomings of the data you chose? Statistics lose accuracy when the sample you test is not representative of the population you’re studying. Revisit what claim you’re trying to make with the analysis, how well the metrics you chose connect to that claim, and where there might be room for bias. Keeping these questions in mind will help you avoid misinterpreting the statistics when drawing conclusions from your data.

Lastly, remember that replication is key when it comes to statistical significance. Even when a result is significant at a 95% confidence level, there is still a 1-in-20 chance of a false positive when no real effect exists. If there are high risks to taking an incorrect action based on your conclusion, replicate the study with an independent data set.

When learning these methods, you’re definitely not alone.
There are many great online tutorials on data science and its programming languages if you want to explore other methods. For example, a more in-depth explanation of hypothesis tests can be found here: https://machinelearningmastery.com/statistical-hypothesis-tests/

Congrats! You took your first step towards incorporating quantitative measures into your UXR practice. Happy data crunching!


Designer, researcher, and listener. Interested in ethical tech, mixed methods, and honest, clear communication. Exploring enterprise app development with Lextech.