Panel Power: Maximizing Data Quality When Using Opt-In Panels

Bully Pulpit International
Mar 21, 2024

by Sheldonn Plummer-Duff

A very interesting article from our colleagues at Pew has been making the rounds, highlighting the differences between opt-in survey panels and panels with probability-based recruitment. Specifically, they found that data from opt-in panels may have unintentionally overstated views on sensitive issues due to the presence of “bogus respondents” (e.g., those who are inattentive or provide fraudulent answers).

Like other researchers, BPI uses opt-in panels as one of many mode options and has done extensive internal research on what drives the most effective and representative surveys. We also understand the cost benefits of this type of recruitment and use multiple methods to ensure we deliver quality data no matter the mode.

A few things we’ve found essential to getting the highest data quality from opt-in panel sources:

  • Multiple layers of data quality checks are key to removing respondents who give bogus answers or aren’t paying attention; there’s no single silver bullet for finding them. Based on our testing, we use a combination of questionnaire design and metadata checks, including speed and straight-lining checks, open-ended questions, reCAPTCHA / bot detection, cluster detection, conflicting answers, and other industry standards, to detect and remove these types of respondents (see the sketch after this list for two of the simpler checks).
  • Even with these layers in place, some panels just don’t meet our quality standards, and we don’t use them. Panels that over-survey people, rely on less reliable recruitment practices, or lack sufficient data quality checks to prevent fraud and monitor demographic shifts in their population are screened out as potential partners. Staying up to date on the processes panel partners use to address these issues is more important than ever as technology and attention spans evolve.
  • Evaluating panel quality also highlights the need for data quality checks that can tell the difference between fraudulent and inattentive respondents. Inattention can reflect issues with your questionnaire design, while fraud is operational, with the panel’s sampling and sources themselves being the problem.
  • Using multiple modes is important for conducting representative surveys. Years ago, you could efficiently conduct a phone poll as a “gold standard” opinion survey, but that’s no longer feasible for reasons of both cost and representativeness, so a balance is needed. While we do use panels to gather survey responses, we also use other modes like phone calls and text-to-web surveys. Comparing answers and demographics across these modes allows us to understand if and where differences are coming from and make fielding and weighting adjustments as needed. See the linked key takeaways on mode differences we presented at AAPOR 2023.
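
To illustrate the first point, here is a minimal sketch of two of the simpler checks mentioned above: flagging “speeders” (implausibly fast completions) and “straight-liners” (identical answers across a grid of rating questions). The column names and the 120-second threshold are hypothetical placeholders for this example, not BPI’s actual pipeline.

```python
import pandas as pd

def flag_low_quality(responses: pd.DataFrame,
                     grid_cols: list[str],
                     min_seconds: float = 120.0) -> pd.DataFrame:
    """Add boolean quality flags to a frame of survey responses."""
    out = responses.copy()

    # Speed check: completion times far below a plausible minimum
    # usually mean the respondent did not actually read the questions.
    out["flag_speeder"] = out["duration_seconds"] < min_seconds

    # Straight-lining check: zero variation across a battery of grid /
    # rating items (e.g., answering "4" to everything) is a classic
    # inattention signal.
    out["flag_straightliner"] = out[grid_cols].nunique(axis=1) == 1

    # No single flag is a silver bullet; in practice several signals
    # (open-ends, bot detection, cluster detection, conflicting answers)
    # are combined before a respondent is removed.
    out["flag_review"] = out["flag_speeder"] | out["flag_straightliner"]
    return out
```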

Beyond all of the above, we work to tie our survey work back to first-party audience data. Investments in technology and processes allow us to target the right people with multi-modal surveys and to ensure we have a representative sample, from getting the right balance on partisanship to other key variables like industry or job role (a simplified weighting sketch follows below).
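
To make the weighting idea concrete, here is a simplified sketch of raking (iterative proportional fitting), one standard way to balance a sample on variables like partisanship. The target shares and column names are invented for illustration; this is not BPI’s actual weighting methodology.

```python
import pandas as pd

def rake_weights(df: pd.DataFrame,
                 targets: dict[str, dict[str, float]],
                 n_iter: int = 25) -> pd.Series:
    """Adjust weights so each variable's weighted shares match its targets."""
    w = pd.Series(1.0, index=df.index)
    for _ in range(n_iter):
        for var, shares in targets.items():
            # Current weighted share of each category of this variable.
            # Assumes every category in the data appears in `shares`.
            current = w.groupby(df[var]).sum() / w.sum()
            # Scale each respondent's weight by target share / current share.
            w = w * df[var].map({k: shares[k] / current[k] for k in shares})
    # Normalize so the average weight is 1.0.
    return w / w.mean()

# Hypothetical usage: balance an invented sample to assumed margins.
sample = pd.DataFrame(
    {"party_id": ["Dem", "Dem", "Rep", "Ind", "Rep", "Ind", "Dem", "Ind"]}
)
weights = rake_weights(
    sample, {"party_id": {"Dem": 0.33, "Rep": 0.33, "Ind": 0.34}}
)
```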

It’s exciting to see how the research industry is changing and how we can tackle the new challenges that arise. It also continues to highlight the need for a flexible approach that balances the strengths and weaknesses of different modes, including opt-in panels.
